Abstract

Numerical calculation is usually employed to analyse the safety risks of tunnel construction under existing structures. However, due to the fuzziness and randomness of the construction environment, the simulation results fluctuate significantly. Given the fuzziness and randomness encountered during the safety risk assessment of tunnels undercrossing existing structures, this study analyses and summarises previous experience and selects a relevant algorithm of the cloud model to solve the uncertainty problem during the assessment. The study proposes three forms of cloud models to evaluate the risks of tunnels undercrossing existing structures and performs a comparative analysis between them. Finally, the effectiveness and feasibility of the cloud model calculation method are verified through a case study. The research shows that (1) the accuracy of the one-dimensional and two-dimensional cloud models was high, and the difference between them was small, indicating that the cloud model method used to determine the risk level of tunnels underpassing existing structures is reasonably effective; (2) because the soft ‘and’ operates across two dimensions, its distribution probability was dispersed, making the soft ‘and’ computation more strongly affected by individual indices and hence less accurate than the one-dimensional and two-dimensional cloud models; and (3) compared with numerical calculation, the cloud model calculation was more convenient and reflected both the relationships among the indices and the relationship between the indices and the complex whole. The evaluation results use a comprehensive evaluation probability to determine the risk level, which makes them reliable.

1. Introduction

In addition to the safety risks inherent in the construction itself, threats can also arise to the stability of the existing structures and to traffic safety. Current studies on tunnels undercrossing existing structures have mostly analysed the influence and risk of tunnel construction through numerical calculation. However, due to the fuzziness of the construction environment and the randomness of construction, the numerical calculation and simulation results fluctuate. Many scholars have studied the risk assessment of engineering in recent years, mainly adopting the unascertained measure theory, the neural network method, the Monte Carlo method, the Fisher discriminant analysis (FDA) method, and the fuzzy evaluation method, among others [1].

Chen applied the unascertained measure theory to tunnel risk assessment. First, the risk factors were summarised, and the evaluation index space was established so that each evaluation object corresponded to n single evaluation indices. The risk evaluation levels of the object were divided, and the risk level was determined based on the comprehensive measurement of multiple indicators [2]. Huang studied the application of the fuzzy evaluation method to the risk assessment of tunnel construction. The fuzzy evaluation method is a comprehensive evaluation method based on fuzzy mathematics, which weighs various factors and quantifies some fuzzy indices to carry out a comprehensive evaluation. First, the evaluation indices are set up, followed by establishing the evaluation set and the single-factor fuzzy judgements; thereafter, the fuzzy mapping from the factor set of the evaluation object to the evaluation set is constructed [3]. Wang studied the Monte Carlo method and evaluated the risk of cable-stayed bridge construction. The Monte Carlo method extracts random numbers from an index set based on distribution laws and performs repeated calculations for fuzzy evaluation, which can effectively simulate the randomness and uncertainty in practical engineering. However, an actual probability distribution does not necessarily fit any particular distribution law exactly [4]. Wu et al. carried out a risk assessment study applying fuzzy Bayesian methods to tunnelling beneath existing tunnels. Bayesian estimation requires obtaining the distribution probability of the index first. According to the different distribution probabilities, the probability distribution function is determined; the index sample values are then established, and the overall distribution function is determined from the probability density of the likelihood function at the sample values and the probability distribution function. The evaluation results are determined according to the overall distribution function.
Although the method can express results in fuzzy form, it is sometimes difficult to determine the probability distribution function, which shows that the method is not applicable to very vague evaluations [5]. Jiang performed the risk assessment of a tunnel by using the Mond method. The core idea of the Mond method is to divide the main body into sections, each regarded as a unit. A material coefficient and a risk coefficient are used to quantify each risk source, and other risk factors correct the total risk via a product of coefficients to obtain the final risk level. The method’s limitation is that it cannot reflect the contribution of a single index [6]. Dong studied the application of the FDA method to tunnel construction risk assessment. The essence of the FDA method is to project high-dimensional data into a low-dimensional space, determine the discriminant function, and identify the risk categories according to the principle of maximum distance between classes and minimum distance within classes. The method’s drawback is that its scope of application is very narrow because the overall evaluation does not reflect fluctuations in the indicators [7]. The Poisson process method, essentially a counting process, is also often used for risk assessment. For the counting process {N(t), t ≥ 0}, there exists λ > 0 such that N(0) = 0 and N(t) is an independent-increment process; in any interval of length t, the number of events follows a Poisson distribution with mean λt. The Poisson process method is characterised by a simple model and a simple calculation process; its limitation is that independent increments are required [8]. Liang used the backpropagation (BP) neural network to evaluate the risk of a tunnel crossing a river. The dataset was trained and tested with the MATLAB neural network toolkit, and the risk value was finally obtained by comparison with the standard [9]. Shi et al.
used the BP neural network to predict and analyse blasting and combined grey relational analysis with an artificial neural network to improve the network’s accuracy. However, the neural network algorithm has problems such as long training times, slow convergence, and numerous, complex input factors [10]. Huang et al. evaluated the index weights for blasting vibration by the analytic hierarchy process (AHP) and evaluated the blasting vibration through a normal distribution; both the process and the results showed strong subjectivity in the weights produced by the AHP method [11]. Büyüközkan et al. adopted the hesitant fuzzy linguistic term set methodology to overcome the uncertainty-related difficulties of this multicriteria decision-making (MCDM) problem. The AHP method was implemented to find the criteria weights, and the vital business intelligence system alternative was then determined through the hesitant fuzzy linguistic complex proportional assessment method [12]. The technique for order preference by similarity to ideal solution (TOPSIS) is another method used for MCDM problems. However, TOPSIS is often criticised for its inability to deal with vague and uncertain problems: it does not consider the inherent random uncertainty or imprecision of the parameters, which is unrealistic and can result in unreliable assessments [13].

To sum up, all of these risk assessment methods have their own disadvantages; for example, some are highly subjective when experts assess construction risks, so they cannot reflect the fuzziness and randomness of engineering well. Compared with the other methods, the cloud model possesses the following advantages. (1) It omits complex mathematical derivation and calculation processes; thus, people without in-depth mathematical training can also understand and master the process, which greatly increases its accessibility and appeal. (2) The process is simple and fast, so project risk assessment calculations take less time and cost less. (3) It is relatively flexible, allowing different forms of evaluation to be used for different evaluation subjects. Meng et al. [14] point out that the vast majority of cloud models currently in use are one-dimensional and that further research is needed to apply two-dimensional or other forms of cloud models to construction risk assessment. In this study, the cloud model method was used to evaluate the risk of tunnels undercrossing existing structures, and the representation forms of the cloud model were compared for this purpose.

The weight determination methods mainly include the AHP method, the Delphi method, statistical analysis, the deviation method, and the entropy method [15]. The AHP method can determine weights according to practical problems, but its subjectivity is large. The Delphi method can be applied with or without data but is currently used less often. Statistical analysis is based on historical regularities; its disadvantage is that it cannot reflect the influence of changing conditions. The deviation method is similar to the entropy method, and both are applicable only when the index fluctuation is very small, which greatly limits their use. Thus, the AHP method is the most widely used at this stage. To address the subjectivity of the AHP method, scholars have carried out plenty of research; some studies have combined the CRITIC method with the AHP method and demonstrated its rationality [16].

2. Preliminaries

Professor Li Deyi proposed the cloud model in 1995 by considering fuzziness and randomness together, realising the conversion between qualitative concepts and quantitative expression [17]. Let X = {x} be the quantitative domain of numerical representation and Ã be a qualitative concept on X. If, for any element x, there is a random number μÃ(x) with a stable tendency, called the membership degree of x to Ã, then the distribution of the membership degrees of concept Ã over [0, 1] forms the membership cloud μ(x), where each x is a cloud droplet [18].

The cloud representation of a concept characterises it quantitatively using three basic numerical characteristics: the expected value Ex, the entropy En, and the hyperentropy He. Most evaluation objectives follow a normal distribution; the one-dimensional cloud model under a normal distribution is shown in Figure 1.

In the cloud model, the expected value Ex is the central value of the domain and the most representative sample of the qualitative concept after quantification; it appears in the cloud chart as the highest point of the cloud [19]. The entropy En is the measurable range of the qualitative concept, reflecting the margin and fuzziness of the concept, and is determined jointly by the randomness and fuzziness of the concept [20]; it is reflected in the span of the cloud. The hyperentropy He is the entropy of the entropy, a measure of the uncertainty of En [21]. Hyperentropy links fuzziness and randomness, reflects the degree of dispersion of the cloud droplets, i.e., the thickness of the cloud [22, 23], and extends the normal distribution to the generalised normal distribution [24]. The one-dimensional expectation curve determined by Ex and En is expressed as follows:

μ(x) = exp[−(x − Ex)² / (2En²)].  (2)

The cloud model realises qualitative and quantitative mutual conversion through the cloud generator, which includes the forward cloud generator and reverse cloud generator [25]. After the sample data are obtained, a reverse cloud generator is used to convert the exact values into digital eigenvalues (Ex, En, and He), reflecting the overall characteristics.
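A forward normal cloud generator of the kind described above can be sketched as follows. This is a minimal illustration assuming the standard two-step sampling scheme (perturbed entropy, then droplet); the function name and the example eigenvalues are ours, not from the paper:

```python
import math
import random

def forward_cloud(ex, en, he, n):
    """Forward normal cloud generator: produce n droplets (x, membership)."""
    drops = []
    for _ in range(n):
        # Step 1: sample a perturbed entropy En' ~ N(En, He^2).
        en_p = random.gauss(en, he) or 1e-12  # guard against a degenerate 0.0 sample
        # Step 2: sample the droplet x ~ N(Ex, En'^2) and its membership degree.
        x = random.gauss(ex, abs(en_p))
        mu = math.exp(-(x - ex) ** 2 / (2 * en_p ** 2))
        drops.append((x, mu))
    return drops

# Hypothetical eigenvalues: Ex = 5.0, En = 0.5, He = 0.05, 1,000 droplets.
drops = forward_cloud(5.0, 0.5, 0.05, 1000)
```

Plotting the droplets (x, μ) produces a cloud chart of the kind shown in Figure 1, with thickness controlled by He.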

2.1. Calculate the Sample Mean

For input sample scores x1, x2, …, xm, the sample mean is

Ex = X̄ = (1/m) Σ xi.  (3)

The sample variance is

S² = (1/(m − 1)) Σ (xi − X̄)².  (4)

2.2. Eigenvalue Calculation

The entropy and hyperentropy are then estimated from the sample as

En = √(π/2) · (1/m) Σ |xi − Ex|, He = √(S² − En²).  (5)
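Taken together, these steps form the reverse (backward) cloud generator. A minimal sketch, assuming the widely cited moment-based backward-cloud estimators (the sample scores below are hypothetical):

```python
import math

def backward_cloud(samples):
    """Reverse (backward) cloud generator: estimate (Ex, En, He) from expert scores."""
    m = len(samples)
    ex = sum(samples) / m  # sample mean -> expected value Ex
    s2 = sum((x - ex) ** 2 for x in samples) / (m - 1)  # sample variance
    # First-order absolute central moment estimator for the entropy En.
    en = math.sqrt(math.pi / 2) * sum(abs(x - ex) for x in samples) / m
    # Hyperentropy He; clamp to zero in case rounding makes S^2 - En^2 negative.
    he = math.sqrt(max(s2 - en ** 2, 0.0))
    return ex, en, he

# Hypothetical scores from seven experts for one index.
ex, en, he = backward_cloud([7.2, 7.5, 7.8, 7.4, 7.6, 7.3, 7.7])
```

The clamp on S² − En² is a practical safeguard: for small samples the estimate can dip slightly below zero even though He is non-negative by definition.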

For the one-dimensional cloud model, we can intuitively see that the cloud model treats the entire evaluation object as a random probability event and reflects the possible results through the simulation of cloud droplets. However, the risk of a tunnel undercrossing an existing structure is jointly controlled by the risk to the existing structure and the risk posed by the tunnel itself. The one-dimensional cloud model can dilute the weights of important influencing factors so that they are not well reflected in the evaluation. To avoid such weight dilution when assigning the indicators, the indicators can be further divided: the indices are split into two groups, and the weights are calculated by the AHP-CRITIC method. Although the weights are calculated separately for the two groups, this does not mean that there is no influence between them; the correlation coefficients between the indicators can reflect the interaction between the two groups. Using the grouped index weights within the architecture of the one-dimensional cloud model for evaluation is equivalent to performing an additive Boolean operation on the two groups of indicators. This is a form of the cloud model soft ‘and’.

For the cloud model soft ‘and’, the one-dimensional expectation curve determined by Ex and En is the same as that of the one-dimensional cloud model, as shown in formula (2). Equations (3)–(5) are likewise used to find the eigenvalues Ex, En, and He.

The Boolean computation of cloud models can simplify a multidimensional cloud into many one-dimensional cloud models. When dealing with a double condition and a single rule, the soft ‘and’ can be regarded as a qualitative concept equivalent to the logical ‘and’ operation, in which the two dimensions of the domain correspond, respectively, to certainty degrees in [0, 1]:

The simplified one-dimensional cloud can be calculated as follows.

Input sample point Ci, where i = 1,2, …, m, is

The digital characteristics of the soft ‘and’ degree need to be determined according to the research problem. In the process of adjustment, since there is no fixed formula or theorem to regulate the parameter value, expertise from subject matter experts is needed to determine the parameter value in practice [26, 27].

To reflect the overall situation more realistically, (9) is improved according to the weights of the two dimensions:

If weights are assigned within each group, the indicators are not folded back into the architecture of the one-dimensional cloud model but are expressed separately. This makes the distinction between the indicators more obvious and shows clearly which indicators most strongly affect each of the two risk sources. The result is a two-dimensional cloud model.

Similar to (2), we can obtain the two-dimensional expectation curve as follows:

μ(x, y) = exp{−[(x − Exx)²/(2Enx²) + (y − Exy)²/(2Eny²)]}.

The digital eigenvalues of the cloud are [Exx, Enx, Hex, Exy, Eny, Hey], which can be calculated from equation (5) by using the two groups.
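Droplet generation for such a two-dimensional cloud can be sketched as below. This is a minimal sketch in which the joint membership is assumed to be the product of the two one-dimensional Gaussian memberships (matching the two-dimensional expectation curve), and the six example eigenvalues, in the order [Exx, Enx, Hex, Exy, Eny, Hey], are hypothetical:

```python
import math
import random

def forward_cloud_2d(exx, enx, hex_, exy, eny, hey, n):
    """Two-dimensional forward cloud generator: droplets (x, y, joint membership)."""
    drops = []
    for _ in range(n):
        enx_p = random.gauss(enx, hex_) or 1e-12  # guard against degenerate samples
        eny_p = random.gauss(eny, hey) or 1e-12
        x = random.gauss(exx, abs(enx_p))
        y = random.gauss(exy, abs(eny_p))
        # Joint membership as the product of the two Gaussian memberships.
        mu = math.exp(-((x - exx) ** 2 / (2 * enx_p ** 2)
                        + (y - exy) ** 2 / (2 * eny_p ** 2)))
        drops.append((x, y, mu))
    return drops

# Hypothetical eigenvalues [Exx, Enx, Hex, Exy, Eny, Hey].
drops2d = forward_cloud_2d(5.0, 0.4, 0.04, 7.0, 0.5, 0.05, 1000)
```

Projecting the droplets onto the xOy plane gives the kind of boundary chart described later for Figure 10.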

3. Evaluation Process

The framework of this method comprises three main parts: obtaining the evaluation parameters, establishing the cloud model, and performing the risk assessment, as shown in Figure 2.

The basic idea of fuzzy evaluation is to use fuzzy set transformations to describe the fuzzy boundary of each evaluation factor, to construct the corresponding fuzzy evaluation matrix from the membership values (according to the grading standard of the evaluation factors) and the weight of each factor, and to identify the risk level of the evaluation object through multiple operations [28].

Previous studies have used data from the past few years as research databases to determine the weights of the indicators [29, 30]. However, a database formed by retrieving such past data is not comprehensive, lacks relevance, and can only characterise the general norm. For this reason, the weights of the factors are determined here by combining subjective and objective weighting methods. First, the AHP method usually calls for several experts to compare the importance of each evaluation index and to give scores (1–10 points) to increase the accuracy of the results [31, 32]. However, since the results of the AHP method are influenced by human factors, the objective CRITIC weighting method is adopted, in combination with the scoring results, to balance the subjective influence and realise comprehensive subjective-objective weighting.

The CRITIC method comprehensively measures the objective weight of an index on the basis of its contrast strength and its conflict with the other indices, considering both index variability and the correlations between indices [33]. This does not mean that the bigger a number is, the more important it is; rather, the objective attributes of the data themselves are used for evaluation [34]. The principle is as follows.

3.1. The Raw Data Matrix Is Formed

The original matrix is formed according to the scoring results of the n indicators given by P experts, where xij is the evaluation value given by the jth expert for the ith index.

3.2. Data Normalisation

Consider n indices X1, X2, X3, …, Xn, where Xi = {x1, x2, …, xP}. Assume that the normalised values of the index data are Y1, Y2, …, Yn. Because the smaller the value of an index is, the safer it is, the normalisation follows [34]:

3.3. Indicator Variability and Conflict

Variability is expressed in terms of the standard deviation, where Si represents the standard deviation of the ith indicator:

The symmetric matrix R with dimension n × n and elements rjk is constructed. According to [35, 36], the Pearson correlation coefficient ρjk can be used in place of rjk to provide a more general measure of the relationship between the vectors Rj and Rk; it is the correlation coefficient between the two evaluation indices j and k:

Here, the overbarred terms are the mean values of the data for item j and item k, respectively.

3.4. Empowerment

Ci is defined as the importance coefficient of the index, indicating the role of the ith index in the overall index system. The greater Ci is, the more significant is its role, whereby more weight should be given:

Thus, the objective weight wi of the ith index can be defined as
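The CRITIC steps in Sections 3.1–3.4 can be sketched end to end as follows. This is a minimal sketch of the objective (CRITIC) part only, not the combined AHP-CRITIC weighting; the min-max normalisation direction and the 3-indicator, 5-expert score matrix are our assumptions:

```python
import math

def critic_weights(scores):
    """CRITIC objective weights from scores[i][j]: i indexes the n indicators,
    j the P expert scores for that indicator."""
    n, p = len(scores), len(scores[0])
    # 3.2 Min-max normalisation of each indicator's score row.
    norm = []
    for row in scores:
        lo, hi = min(row), max(row)
        norm.append([(x - lo) / (hi - lo) if hi > lo else 0.0 for x in row])
    means = [sum(r) / p for r in norm]
    # 3.3 Variability: standard deviation of each normalised indicator.
    stds = [math.sqrt(sum((x - m) ** 2 for x in r) / (p - 1))
            for r, m in zip(norm, means)]

    def corr(a, b, ma, mb):
        # Pearson correlation coefficient between two indicator rows.
        num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        den = math.sqrt(sum((x - ma) ** 2 for x in a)
                        * sum((y - mb) ** 2 for y in b))
        return num / den if den else 0.0

    # 3.4 Importance coefficient Ci: contrast strength times conflict.
    c = [stds[i] * sum(1 - corr(norm[i], norm[k], means[i], means[k])
                       for k in range(n))
         for i in range(n)]
    total = sum(c)
    return [ci / total for ci in c]

# Hypothetical 3-indicator x 5-expert score matrix.
w = critic_weights([[7, 8, 6, 7, 9], [5, 5, 6, 4, 5], [9, 8, 9, 7, 8]])
```

The returned weights sum to one; a larger weight indicates an indicator with greater contrast strength and more conflict with the others.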

The digital eigenvalues of each single factor can be obtained through the reverse cloud generator after the experts score that factor. With single-factor weighting, polynomial fitting is performed to obtain the overall digital characteristics:

According to the transformation formula of the normal cloud model, the expectation Ex, entropy En, and hyperentropy He of the boundary cloud model for each classification grade of an evaluation index can be obtained. Combined with [37], the boundary of each risk grade is distinct, and there is no ambiguity of ownership. Therefore, the conversion formula is

Ex = (Cmax + Cmin)/2, En = (Cmax − Cmin)/6, He = K,  (21)

where Cmax and Cmin correspond to the maximum and minimum boundary values of the grading standards, respectively, and K is a constant. According to [38], K = 0.1En.
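A boundary cloud for each grade can then be computed directly from its interval. The sketch below assumes the commonly used conversion Ex = (Cmax + Cmin)/2 and En = (Cmax − Cmin)/6, together with He = 0.1En as stated in the text; the five-grade partition of a 0–10 scale is hypothetical:

```python
def boundary_cloud(c_min, c_max):
    """Convert a risk-grade interval [c_min, c_max] into cloud eigenvalues."""
    ex = (c_min + c_max) / 2.0   # grade midpoint -> expected value
    en = (c_max - c_min) / 6.0   # assumed 6-sigma style span -> entropy
    he = 0.1 * en                # K = 0.1 En, as stated in the text
    return ex, en, he

# Hypothetical five-grade partition of a 0-10 scoring scale.
grades = {"low": (0, 2), "lower": (2, 4), "medium": (4, 6),
          "high": (6, 8), "extremely high": (8, 10)}
clouds = {g: boundary_cloud(*b) for g, b in grades.items()}
```

Feeding each (Ex, En, He) triple to a forward cloud generator reproduces a risk level boundary cloud chart of the kind described for Figure 4.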

4. Practical Application Example

In this section, a practical example is used to compare the results of the cloud model fuzzy evaluation with those of the numerical calculation.

4.1. Description

The project area was located in the mountainous area of the southern margin of the Yunnan Plateau. The size of the tunnel excavation section was 2.9 m × 2.9 m. The section was excavated by new Austrian tunnelling method (NATM) blasting. Along the new tunnel, there were three locations under the existing Kunhe Railway; tunnel mileage K0 + 109.7 (Kunhe Railway K311 + 230, tunnel buried depth of 15.68 m) is taken as an example.

4.2. Establishing a One-Dimensional Cloud Model

Based on the recommended standards [39], the relevant literature [40–42], field investigations, and the Guidelines for Safety Risk Assessment in Highway Bridge and Tunnel Engineering Construction, 12 parameters, as shown in Figure 3, were finally determined as the safety risk assessment indices. The corresponding index risk grade boundaries were then determined.

The indicator risk grade boundary was divided into five risk grades: low risk, lower risk, medium risk, high risk, and extremely high risk [43, 44]. According to formula (21), the cloud digital eigenvalues of the index risk grade were obtained, as shown in Table 1.

The digital eigenvalues of the graded cloud in Table 1 were processed and calculated with MATLAB according to the normal cloud generator to obtain the risk level boundary cloud chart, as shown in Figure 4.

First, seven experienced field experts in engineering construction, safety management, and blasting engineering were invited to compare the importance of the 12 evaluation indicators according to the actual situation and give each a score between 0 and 10 points. Thereafter, the weights of the evaluation indices were calculated according to formulae (12)–(19) by adopting the CRITIC method. According to the scoring results, the cloud model digital characteristics of the indicators were calculated using formulae (3)–(5). The parameters of the indicators are shown in Table 2.

The correlation coefficient matrix ρjk was calculated according to formula (16).

The ρjk calculation results are as follows:

The correlation coefficient vector Rj was calculated according to formula (17):

The weight of the evaluation index was calculated according to formula (19). According to the scoring situation, the digital characteristics of the cloud model of the index were calculated by using formula (5). The calculated parameters of the index are shown in Table 2.

The digital features of the whole were calculated by formula (20):

According to [45], when N is 1,000, the error is relatively small, and the confidence of the calculated results is relatively high. Therefore, the cloud droplet number N is 1,000. A cloud model was built according to the overall digital characteristics. By comparing with the risk level boundary cloud model, the calculation results of the one-dimensional cloud model could be obtained. The results are shown in Figure 5.
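The comparison step, generating N = 1,000 droplets from the overall cloud and classifying each against the grade boundary clouds by maximum membership, can be sketched as follows. The grade centres and entropies used here, and the overall cloud, are hypothetical illustrations rather than the paper's values:

```python
import math
import random

def grade_probabilities(ex, en, he, grade_clouds, n=1000):
    """Generate n droplets from the overall cloud and classify each against the
    grade boundary clouds by maximum membership; return grade frequencies."""
    counts = {g: 0 for g in grade_clouds}
    for _ in range(n):
        en_p = random.gauss(en, he) or 1e-12  # guard against a degenerate sample
        x = random.gauss(ex, abs(en_p))
        # Pick the grade whose expectation curve gives the droplet the
        # highest membership degree.
        best = max(grade_clouds,
                   key=lambda g: math.exp(-(x - grade_clouds[g][0]) ** 2
                                          / (2 * grade_clouds[g][1] ** 2)))
        counts[best] += 1
    return {g: c / n for g, c in counts.items()}

# Hypothetical grade clouds (Ex, En) on a 0-10 scale and a hypothetical overall cloud.
bounds = {"low": (1.0, 1 / 3), "lower": (3.0, 1 / 3), "medium": (5.0, 1 / 3),
          "high": (7.0, 1 / 3), "extremely high": (9.0, 1 / 3)}
probs = grade_probabilities(8.6, 0.28, 0.04, bounds)
```

The returned frequencies play the role of the comprehensive evaluation probability: the grade with the highest frequency is taken as the risk level.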

According to the one-dimensional cloud model calculation, the risk level of blasting construction at this location was extremely high. In addition, it can be seen intuitively from Table 3 that the indices with the greatest risk are the tunnel section area, the distance between the tunnel and the existing structure, and the blasting charge.

4.3. Boolean Calculation of the Cloud Model

To refine the evaluation, the indicators were divided into two dimensions. The 12 risk indicators were first divided into tunnel risk factors and the existing structure risk factors. The two-dimensional indicators are shown in Figure 6.

First, the index groups were weighted, and the evaluation was then returned to a one-dimensional form through the soft ‘and’ computation of the cloud model. According to formula (10), the digital eigenvalues of the soft ‘and’ can be calculated as follows:

The resulting cloud model is shown in Figure 7.

The cloud model was established according to the overall digital characteristics [7.8134, 0.2770, 0.0423]. The comparison results with the risk grade boundary cloud model are shown in Figure 8.

According to Figure 8 and the normal distribution probability formula, it can be concluded that the risk level of blasting construction at this location belonged to high risk with a probability of 82% and to extremely high risk with a probability of 16%.
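The normal-distribution probability referred to here can be reproduced with the error function. In this sketch the grade intervals on the 0–10 scale are our assumption, so the resulting probabilities are illustrative rather than the paper's exact 82%/16% split:

```python
import math

def interval_probability(ex, en, a, b):
    """P(a < X < b) for X ~ N(ex, en^2), computed with the error function."""
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return phi((b - ex) / en) - phi((a - ex) / en)

# Soft-'and' overall cloud from the example: Ex = 7.8134, En = 0.2770.
# The grade intervals below are hypothetical (a 0-10 scale split into five bands).
p_high = interval_probability(7.8134, 0.2770, 6.0, 8.0)
p_extreme = interval_probability(7.8134, 0.2770, 8.0, 10.0)
```

With these assumed intervals, most of the probability mass falls in the high-risk band and the remainder in the extremely-high band, mirroring the qualitative split reported above.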

4.4. Establishing the Two-Dimensional Cloud Model

The tunnel’s own risk and the risk to the existing structures adopted a unified standard, and the boundary division of the index risk levels was unchanged. The boundary division of the risk levels follows the barrel (shortest-stave) principle: when a single factor reaches a risk level, the whole is assigned to that level. After obtaining the cloud digital characteristic values of the risk grade boundary lines, the two-dimensional cloud chart of the risk grade boundaries was obtained according to the expectation curve function of the two-dimensional cloud model, as shown in Figure 9.

The model was projected to the xoy plane to divide the risk level boundaries more intuitively, as shown in Figure 10.

Although the indicators were divided into two dimensions, the connection between the indicators was not broken. To calculate the correlation coefficient, the whole should be calculated in a unified way while the corresponding weights and digital features of the cloud model should be calculated separately. The calculation results are shown in Tables 3 and 4.

The overall numerical characteristics of the two dimensions are

According to the overall digital characteristics [6.2924, 0.0327, 0.0256, 8.3465, 0.0506, 0.0396], the cloud model was established. The comparison results of the cloud model and the risk level boundary are shown in Figure 11.

According to the results calculated by the two-dimensional cloud model, the maximum-probability risk level of blasting construction at this location was extremely high. According to the normal probability distribution, about 0.04% of the probability lay at high risk and 99.96% at extremely high risk. As shown in Figure 11, the tunnel’s own risk is the weak point of the overall risk assessment, and the risk to the existing structures is the dimension with the largest risk. As can be seen from Table 5, the indices with the largest risk are the distance from the existing structures and the blasting charge.

4.5. Numerical Calculations

The software FLAC 3D was used to build a three-dimensional tunnel model for numerical simulation. To simulate the particle vibration velocity gradually decaying as the distance from the detonation centre increases, the Sadovsky empirical attenuation formula was employed for the analysis [46]:

v = k(∛Q / r)^α,

where v is the particle vibration velocity (cm/s), Q is the amount of initiating charge (kg) at the maximum stage corresponding to the vibration velocity v, r is the distance to the blasting centre (m), k is a coefficient related to the geological and environmental conditions and the rock characteristics, and α is the attenuation index of the blasting seismic wave, related to the geological conditions.

According to [47] and the special blasting design scheme, the detonating charge Q is 11.7 kg. The parameters k and α were selected by referring to Table 6 [48].

The rock between the top arch of the diversion tunnel and the railway subgrade mainly comprises strongly weathered rock with low strength and low hardness. The k value was taken as 250 and the α value as 1.8. Substituting the Q, r, k, and α values, the particle vibration velocity of the Kunhe Railway (K311 + 230) at tunnel mileage K0 + 109.7 was 7.7 cm/s.
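As a check, the Sadovsky calculation can be reproduced directly, using the stated values Q = 11.7 kg, k = 250, α = 1.8 and taking the burial depth r = 15.68 m as the distance to the blasting centre:

```python
def sadovsky_velocity(q, r, k, alpha):
    """Sadovsky attenuation law: v = k * (Q^(1/3) / r)^alpha, v in cm/s."""
    return k * (q ** (1.0 / 3.0) / r) ** alpha

# Values from the example: Q = 11.7 kg, r = 15.68 m, k = 250, alpha = 1.8.
v = sadovsky_velocity(11.7, 15.68, 250.0, 1.8)  # about 7.7 cm/s
```

The computed velocity agrees with the 7.7 cm/s reported in the text.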

In the 30th blasting cycle (S5), the tunnel was excavated to the bottom of the railway line, where the shortest blasting distance was found.

4.5.1. Model Parameters and Boundary Conditions

The relevant mechanical parameters used for calculations are shown in Table 5.

4.5.2. Establishing the Model

A three-dimensional model was built along the tunnel within a range of 30 m. According to Saint-Venant’s principle, to avoid the boundary effect during modelling, the width of the whole model should be at least 5 times the tunnel diameter [49]. The model was oriented with the direction along the tunnel as the X-axis, the horizontal direction perpendicular to the tunnel as the Y-axis, and the vertical direction as the Z-axis. The angle between the tunnel and the existing railway was 67°. The calculation model is shown in Figure 12.

A free-field boundary was applied around the model to reduce the influence of the boundary effect, and a static boundary condition was used at the bottom of the model to facilitate the application of the dynamic load. The model with the boundary conditions applied is shown in Figure 13.

4.5.3. Analysis of the Numerical Calculation Results

The displacement cloud maps of the railway in the X, Y, and Z directions, after the 30th blasting, were extracted, as shown in Figure 14.

As shown in Figure 14, the maximum displacement in the vertical direction of the railway after the 30th blasting was 13.47 mm. The displacement values in the X and Y directions of each point of the railway were extracted, and the maximum displacement value in the horizontal direction of the railway after the 30th blasting was 5.54 mm.

Through numerical calculations, the horizontal and vertical displacements at the different construction stages are summarised in Figure 15.

As shown in Figure 15, the horizontal displacement of the existing railway begins to change significantly when the tunnel blasting construction reaches stage S2. As the blasting excavation continues, the horizontal displacement continues to increase, reaching its maximum value of 5.54 mm in stage S5; as the blasting construction moves away from the railway centreline, the horizontal displacement gradually decreases. The vertical displacement of the existing railway follows the same pattern: it begins to change significantly in stage S2, continues to increase as construction proceeds, and reaches a maximum of 13.47 mm in stage S5; as the tunnel blasting moves away from the railway centreline, the settlement of the existing railway gradually decreases. According to the control standards issued by the Railway Management Department, drawing on past experience of similar projects in China and considering comprehensive factors such as construction conditions, the deformation control indices for the vertical and horizontal displacements of the railway lines were determined as 4 mm. In summary, based on the numerical simulation results, the risk level of blasting construction at this location should be rated extremely high.

The calculated results are summarised in Table 7.

5. Results and Discussion

To address the fuzziness and randomness of risk assessment, a cloud model association algorithm was used to solve the uncertainty problem during risk assessment. This study outlines three cloud model evaluation methods and compares and analyses them, including a comparison with the numerical calculation results. The results provide a fast and effective technique for the risk assessment of tunnels undercrossing existing structures. The study improves on conventional fuzzy evaluation, making it more efficient and rigorous, and provides strong support for engineering construction.

The calculation results show that the cloud model can provide essential information through a simple calculation method, and the accuracy of the results is high. The probability results were close to the real values.

It can be seen from Table 7 (numerical calculation results) that the one-dimensional and two-dimensional cloud models both rated the risk as extremely high, whereas the soft ‘and’ assigned only a 16% probability to the extremely high grade. The distribution probability of the soft ‘and’ calculation was dispersed because of the ‘and’ operation across two dimensions, which made the soft ‘and’ more strongly affected by individual indicators and resulted in lower accuracy.

In contrast, the one-dimensional and two-dimensional cloud models were feasible in terms of precision and performed well in terms of their ability to express uncertain relationships. The one-dimensional cloud model could reflect the overall evaluation level well and distinguish the relatively large risks of those indicators, thus improving the overall risk level. The two-dimensional cloud model was richer in information than the one-dimensional one and could show which dimension of the index risk was relatively greater, including the index risk under that dimension that was relatively greater.

The indicators of the one-dimensional and two-dimensional cloud models were the same. In the two-dimensional model, the weights of the two dimensions were calculated separately, and the dimension with the highest risk represented the overall risk level, whereas the one-dimensional model was evaluated by the comprehensive weighting of all indicators. Nevertheless, the accuracy difference between the two models was small, indicating that the AHP-CRITIC weighting method determines the index weights by mining the information contained in the indices, making the results credible.
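The objective half of the AHP-CRITIC weighting can be sketched as follows. The CRITIC computation below follows the standard formulation (contrast intensity times conflict); the fusion rule `combined_weights` is a common normalised-product combination and is an assumption here, as the paper's exact combination rule may differ:

```python
import numpy as np

def critic_weights(X):
    """CRITIC objective weights for a decision matrix X (samples x indices)."""
    # Min-max normalise each index column
    Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
    sigma = Xn.std(axis=0, ddof=1)        # contrast intensity of each index
    R = np.corrcoef(Xn, rowvar=False)     # correlation between indices
    conflict = (1 - R).sum(axis=0)        # conflict with the other indices
    C = sigma * conflict                  # information content
    return C / C.sum()

def combined_weights(w_ahp, w_critic):
    # Assumed AHP-CRITIC fusion: normalised product of the subjective (AHP)
    # and objective (CRITIC) weight vectors
    w = np.asarray(w_ahp) * np.asarray(w_critic)
    return w / w.sum()
```

Because the CRITIC term is driven by the sample statistics of the index data, its reliability degrades as the dataset shrinks, which is consistent with the dataset-size limitation noted below.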

Additionally, some limitations exist in this study. (1) Other methods or models are also suitable for risk evaluation. Li [50] argues that risk evaluation is a primary but important task for technological innovation projects and that it is a multiple-criteria group decision-making (MCGDM) process with both probabilistic and fuzzy uncertainty. In future work, we will therefore propose more reasonable operational laws and novel decision-making methods and apply them to the risk evaluation of technological innovation projects. (2) Since the results of the one-dimensional and two-dimensional cloud models are expressed as probabilities, the cloud model faces the question of what probability can be ignored. Because the evaluation itself is fuzzy, the boundary of the evaluation result is also fuzzy: for example, with a 70% probability of low risk and a 30% probability of medium risk, can the assessed condition be considered low risk? The two-dimensional cloud model additionally faces a multicritical-point problem, and further research is required on the role of multiple critical points. In addition, it was found that the AHP-CRITIC method is affected by the size of the dataset when determining the weights; a small dataset provides too little information to balance the subjective component of the weighting.

Through the comparative study of the three methods, it can be seen that the calculation accuracy of the one-dimensional and two-dimensional cloud models was satisfactory. In future research, we will study how the results of the cloud model calculation can be refined; if machine learning is used to optimise the cloud model, the results should become more accurate. Further studies should also seek the best balance of subjectivity in the AHP-CRITIC method when the dataset is small. Similar to [51], attaching a probability to each index could express the probability information of many possible values of an index and enable the situation to be handled more effectively when the probability information is only partly known. Additionally, how to set the safety factor of the one-dimensional and two-dimensional cloud models during evaluation can be studied so that the calculation results are not controversial.

6. Conclusions

To address the fuzziness and randomness of risk assessment, a cloud model association algorithm was employed to solve the uncertainty problem. Three cloud model evaluation methods were then outlined, analysed, and compared with the numerical calculation results. The research provides a fast and effective technique for the risk assessment of tunnels undercrossing existing structures and may attract the interest of researchers. It offers an improved method for fuzzy evaluation, making it more efficient and rigorous and providing a strong guarantee for engineering construction that can be applied globally.

Data Availability

The data used to support the findings of this study are included in this article.

Conflicts of Interest

The authors declare no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China of the North China University of Technology (no. 52178378).