Abstract

This research investigated the viability of using metaheuristic algorithms in conjunction with an adaptive neurofuzzy system to predict seismicity and earthquakes. Combining different metaheuristic algorithms with an artificial intelligence (AI) algorithm and applying them to seismicity data is a promising approach. Modern seismic sensors offer several advantages over older instruments, including (a) a generally linear relationship between the measured values and real ground motion, (b) the ability to measure three orthogonal components of ground movement in a single unit, (c) sensitivity to a very broad range of frequencies, and (d) a high dynamic range that allows detection of both very small and fairly large tremors. To validate the acquired results, a hybrid model combining an adaptive neurofuzzy inference system with particle swarm optimization (PSO), a genetic algorithm (GA), and an extreme learning machine (ELM) (ANFIS-PSO-GA-ELM) was implemented. On the dataset used, all approaches produce excellent and realistic predictions of seismic loads; however, the ANFIS-PSO method produces better results. All the strategies demonstrated a high level of predictability. Finally, this research urges researchers to investigate the performance of triple hybrid metaheuristic (MT) algorithms using a variety of hybrid metaheuristic methodologies, rather than the existing double hybrid MT algorithms.

1. Introduction

An earthquake is a shaking of the earth's surface caused by seismic waves generated by a sudden release of energy in the lithosphere. An area's seismicity describes the frequency, kind, and size of the earthquakes it experiences over time. The creation of positive electrical charges in the ground, resulting from plate movement driven by rock compression, could explain the odd electromagnetic signals observed before an earthquake. Time, epicentral distance, and peak ground acceleration are the three most essential input variables in determining earthquake magnitude. Seismometers measure earthquake magnitudes using the most common scale, the moment magnitude. Many research groups are working hard to monitor earthquake risk, but earthquake forecasting and prediction remain difficult. Earthquakes have become more common in recent years, causing unrelenting fear in people's lives and damage to economies. Geoscientists and structural engineers have attempted to protect against earthquake devastation, but the only real solution is to estimate the magnitude of an earthquake precisely and in a timely manner. With the help of technical advancements, recent developments in seismology have brought crucial information concerning earthquakes to prominence. Fuzzy logic (FL) and neural networks (NN) are expert systems that emulate human reasoning and provide a technical approach. Earthquake forecasting and prediction is a subfield of seismology concerned with the magnitude, time, and location of future earthquakes [1]. The soft computing literature demonstrates that soft computing offers a superior solution to uncertainty concerns [2]. Fuzzy logic deals with uncertainty by offering a human-oriented knowledge representation, and artificial neural networks (ANN) provide self-learning and rule generalization [3]. NN and FL are integrated to create the neurofuzzy system (NFS); their learning and reasoning abilities, respectively, are merged in ANFIS to provide superior prediction capabilities [4]. NNs have been used to address many complicated problems in a variety of disciplines; for example, neural networks have been used to investigate objects and images [5–10]. Artificial neural networks incorporating swarm algorithms, such as the fruit fly optimization algorithm (FOA), have proven to be an effective method for determining the best classifier. For earthquake data, an RBF neural network is employed as a classifier [11]. Classification challenges are also solved with ANN, support vector machines (SVM), and classification trees (CT) [12]. Other classification problem-solving approaches include logistic regression, Bayesian networks, discriminant analysis, fuzzy logic, linear regression, the k-means clustering methodology, and evolutionary algorithms. Hybrid techniques, such as neurofuzzy-based classification, fuzzy probabilistic neural networks, and recursive partitioning of the majority class algorithm, have also been used. Neural networks and a FOA have been used to create classifiers for categorization [13]. ANN and ANFIS show how earthquake occurrence can be analyzed in a variety of ways in Iran [14]. That research examined the temporal-spatial variations of seismicity parameters in Iran using neurofuzzy-based classification. For Iranian earthquakes, principal component analysis standardizes the data so that ANFIS can predict earthquake magnitude.
An ANFIS system was used for future earthquake prediction; grid partition, subtractive clustering, and fuzzy c-means are among the algorithms used in ANFIS modeling for earthquake prediction using data from Iranian earthquakes [15]. In the Indian Himalayan Region, peak ground acceleration prediction was compared between two models, ANN and ANFIS, using the earthquake region as input and the earthquake moment as output [16]. Several researchers have used soft computing techniques to evaluate characteristics of earthquake-related phenomena, and researchers have reported mapping expected damage before an earthquake occurs using remote sensing in conjunction with a soft computing strategy [17]. In comparison to neural systems, a critical advantage of neurofuzzy systems is their ability to reason about a specific condition. This study presents a comparison of methodologies for forecasting earthquake magnitude; such a comparison will aid in predicting the magnitude of future earthquakes. In light of the aforementioned research activities, this work compares ANFIS with alternative approaches for predicting earthquake acceleration and proposes a triple hybrid model.

2. Literature Review

In this research study [18], Kamath and Kamat proposed ANFIS techniques for forecasting earthquake magnitude in the Andaman and Nicobar Islands. A synergistic effect was generated by combining ANN and FIS systems. The European-Mediterranean Seismological Centre provided a dataset of 956 earthquakes from October 1, 2004 to February 20, 2016. The major variables used to estimate the magnitude of incoming earthquakes in that study were depth, the magnitudes of previous seismic events, latitude, and longitude. ANFIS is a model that combines the learning capacity of an ANN with a fuzzy system to compute numerical linguistic grade estimations. For model training, the ANFIS architecture consisted of fuzzy nodes generated by subtractive clustering (accept ratio: 0.5, range of influence: 0.2, reject ratio: 0.15, and squash factor: 1.25) and a hybrid learning algorithm with eight fuzzy rules. The ANFIS model produces accurate results quickly on the input dataset, and the model's performance was tested using the root mean squared error (RMSE). That research concluded that an intelligent approach for earthquake magnitude prediction can be built using ANFIS. In another study [19], Nguyen et al. used PSO to optimize an ANN to solve a ground response problem. That work computed the horizontal deflection of a building after considerable seismic loading using a hybrid PSO-based ANN approach (with input data taken from the Chi-Chi earthquake database). A series of finite element (FE) simulations was used for training and testing the PSO-ANN. The FE simulation dataset was split 80% and 20%, giving 8324 training and 2081 testing samples, respectively. The inputs included the Chi-Chi earthquake soil elastic modulus, dynamic time, dilation angle, friction angle, bending stiffness (EI), structure axial stiffness (EA), unit weight, and Poisson's ratio, with horizontal deflection (Ux) as the output. The results showed that PSO-ANN is more reliable in assessing ground response and the horizontal displacement of short structures following an earthquake. The study by Hasanipanah et al. [20] used particle swarm optimization in conjunction with ANFIS to forecast rock fragmentation. In that model, the values of rock fragmentation, burden, spacing, specific charge, maximum charge per delay, and stemming were measured on 72 blast events. Another work focused on two hybrid techniques [21] in which parameters for forecasting peak ground acceleration (PGA) were investigated. The accuracy of the ANFIS approach was improved by implementing PSO and GA with ANFIS. The developed hybrid models were applied to Pacific Earthquake Engineering Research (PEER) center data. In these hybrid models, the earthquake magnitude, source-to-site distance, faulting mechanism, and average S-wave velocity were used. In comparison to ANFIS and a few other soft computing techniques, the generated PSO-ANFIS-PSO and GA-ANFIS-GA models performed well. The developed models estimate the PGA parameter based on the acquired results, but the PSO-ANFIS-PSO model gives more effective and better results. In another study [22], ANFIS with PSO and GA was used to develop hybrid models for ground vibration forecasting. A dataset of 86 samples was created from two quarries in Iran to develop the prediction models. The input parameters were burden, spacing, stemming, distance from the blast locations, powder factor, and maximum charge per delay (MCD), whereas peak particle velocity (PPV) was the output value.
According to the results of a sensitivity analysis, MCD was found to be the parameter with the greatest influence on PPV. To assess the models' applicability and efficiency, the coefficient of determination (R²) and the root mean square error (RMSE) were used. The ANFIS-GA and ANFIS-PSO results demonstrated the capability of accurately forecasting ground vibration. In comparison with the ANFIS model, ANFIS-GA exhibited a 61% reduction in RMSE and a 10% increase in R², while ANFIS-PSO showed a 53% reduction in RMSE and a 9% increase in R². Thus, the performance of ANFIS was improved with GA and PSO.

3. Analytical Techniques Performed

The analytical techniques used in this study are presented in this section, and their architectures are shown for better understanding.

3.1. Seismic Devices/Sensors

The frequency and severity of earthquakes are on the rise around the globe, and global geological scientists have warned that the Himalayas are due for a devastating earthquake. Important structures should therefore be equipped with seismic sensors and structural health monitoring (SHM) sensors (for measuring quantities such as 3D deflection, vibration, stress, and strain). Figure 1 demonstrates the working of seismic and SHM sensors. Seismic monitoring equipment lets scientists evaluate the damage from even moderate earthquakes; mathematical models can then be used to extrapolate and predict the extent to which buildings could be damaged by major earthquakes. The seismic activity of an earthquake is recorded by seismographs, which are part of a global seismographic network and are deployed underground in locations all over the planet. A seismograph records the ground motion only where the instrument is placed. A seismic monitoring station consists primarily of four parts: the seismic sensors themselves, a data acquisition and storage unit, a power system, and telemetry for transmitting data in real time to data centers. The latter two systems are typically constructed from off-the-shelf parts: batteries, chargers, and solar panels for the power system, and internet service hardware, cell modems, radios, and/or microwave links for the telemetry. The acquisition units, typically referred to as "data loggers," are equipped with a digitizer, a time-stamping mechanism, and software for transmitting data in a uniform format over a telemetry network. The seismic sensors represent the station's highest level of specialization. If the sensor is affixed to the ground, how can it detect any ground motion? The inertia of a mass is exploited by two types of measuring instruments known by distinct names: seismometers and accelerometers. The basic idea is to suspend a heavy mass on a spring or pendulum. Although the ground shifts, the mass does not move until the spring or pendulum has been stretched far enough. To keep track of the instrument's readings, a writing implement can be attached to the mass and a sheet of paper to the ground. The seismometers and accelerometers available today are far superior to their predecessors. They employ feedback loops to prevent the mass from shifting with respect to the ground, and they use magnets and capacitors to generate voltages and currents that can be easily monitored by digitizers, so that the data can be stored and telemetered. Because the mechanical system of the seismometer or accelerometer never really leaves its "rest position," physics can be used to precisely match the measured values to what the ground actually did. The addition of the word "meter" to the names of these instruments reflects the fact that they are, in fact, meters, devices that measure and display values over time, rather than a pen and paper. The new sensors have many advantages over the older, more impressive-looking ones, including (a) a generally linear relationship between the measured values and real ground motion (described above), (b) the ability to measure three orthogonal components of ground movement in a single unit, (c) sensitivity to a very broad range of frequencies, and (d) high dynamic range, which allows for the detection of both very small and fairly large tremors. Consider the old recording paper: it was maybe 1 ft (30 cm) tall.
To cover everything from the smallest to the largest ground motions, a current sensor would need a sheet of paper roughly 3 miles (5 km) tall, which is not really practical. The data logger is therefore an important advance: it makes the data digitally available, enables them to be transmitted via contemporary telemetry options, and enables them to be processed immediately in computers to provide earthquake information rapidly and even earthquake early warnings. These instruments sound excellent, and they are. Both seismometers and accelerometers, however, suffer from a single major flaw: the gap between a small earthquake (M0) and the largest quakes ever recorded (M9.5) is on the order of 3,000,000,000 (3 billion), while the smallest-to-largest signal range that any single instrument can measure is on the order of 10,000,000 (10 million). Therefore, both sensor types are installed together at a site. An earthquake between about M-1 and M4 can be recorded accurately by a nearby seismometer, while an accelerometer can capture everything from magnitude 3 (M3) to magnitude 8.5 (M8.5). When an earthquake is large enough and far enough away from the station, both sensors will pick it up. To capture the full picture of what is going on, seismometers and accelerometers are installed together wherever possible.
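The dynamic-range figures quoted above can be checked with a simple calculation. The sketch below is not part of the original study; it uses the standard rule of thumb that one magnitude unit corresponds to roughly a tenfold change in recorded ground-motion amplitude, and the function name and values are purely illustrative.

```python
import math

def amplitude_ratio(m_small: float, m_large: float) -> float:
    """Approximate ratio of ground-motion amplitudes between two magnitudes
    (one magnitude unit ~ a tenfold change in amplitude)."""
    return 10 ** (m_large - m_small)

# Gap between a tiny M0 event and the largest recorded M9.5 event:
print(f"M0 to M9.5 amplitude ratio: {amplitude_ratio(0.0, 9.5):.2e}")  # ~3.2e9, about 3 billion

# A single instrument with a dynamic range of ~10 million (1e7) spans roughly
# log10(1e7) = 7 magnitude units, hence the need to pair both sensor types.
print(f"Magnitude units covered by a 1e7 dynamic range: {math.log10(1e7):.1f}")
```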

3.2. ANFIS: Adaptive Neurofuzzy Inference System

An adaptive neurofuzzy inference system, also known as an ANFIS (adaptive network-based fuzzy inference system), is a type of ANN [23–25]. ANFIS is a neurofuzzy technique that fuses a neural network with a fuzzy inference system. In terms of functionality, the ANFIS model is similar to the radial basis function network (RBFN) [26]. In data-driven ANFIS identification, a training set of data samples of the function to be approximated is clustered. ANFIS networks have been used effectively to solve rule-based management processes, pattern recognition, classification tasks, and various other challenges. The FIS uses the Takagi-Sugeno-Kang fuzzy model [27, 28] to define a systematic way of producing fuzzy rules from input and output data. The result is a rule-based system whose antecedents are composed of linguistic variables and whose consequents are represented by functions of the input variables. Its inference method is based on a set of fuzzy IF-THEN rules with the capacity to approximate nonlinear functions through learning. The algorithm developed for generating the TSK fuzzy model proceeds iteratively, starting by fuzzy clustering the data and, in each iteration, clustering the input and output data with an increased number of clusters. The steps are structure identification and parameter identification.
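To make the clustering-based structure-identification step concrete, the following is a minimal fuzzy c-means sketch (one of the clustering algorithms named earlier for ANFIS modeling). It is an illustrative implementation of the standard FCM update rules, not the exact routine used in this study; all names are assumptions.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=3, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Minimal fuzzy c-means: returns cluster centers and the membership matrix U."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, n_clusters))
    U /= U.sum(axis=1, keepdims=True)               # memberships sum to 1 per sample
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]   # weighted cluster centers
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (dist ** (2.0 / (m - 1.0)))        # standard FCM membership update
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U
```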

3.3. ANFIS Architecture

Let us suppose the FIS has two input variables (x and y) and a single output (f). The rule base contains fuzzy rules of the Takagi and Sugeno type [29]:

Rule i: IF x is A_i AND y is B_i, THEN f_i = g_i(x, y).

In the antecedent, A_i and B_i are fuzzy sets, and in the consequent, f_i is a crisp function; x and y are the input variables, and g_i is a polynomial.

When f_i is a constant, a zero-order Sugeno fuzzy model emerges, which can be viewed as a special case of the Mamdani FIS [30] in which each rule's consequent is given by a fuzzy singleton. If f_i is a first-order polynomial, the model is a first-order Sugeno model.

A first-order Takagi-Sugeno fuzzy model with two rules is as follows:

Rule 1: IF x is A_1 AND y is B_1, THEN f_1 = p_1 x + q_1 y + r_1;
Rule 2: IF x is A_2 AND y is B_2, THEN f_2 = p_2 x + q_2 y + r_2.

The reasoning approach for the Takagi-Sugeno model is given in Figure 2, and Figure 3 shows the ANFIS architecture, in which nodes of the same layer perform the same function.

Layer-1: every node i is an adaptive node with a node function

O_{1,i} = μ_{A_i}(x), i = 1, 2,  or  O_{1,i} = μ_{B_{i-2}}(y), i = 3, 4,

where x (or y) is the input to node i, A_i (or B_{i-2}) is the linguistic label associated with the node, and μ_{A_i} is its membership function.

μ_{A_i}(x) is usually selected as a generalized bell function,

μ_{A_i}(x) = 1 / (1 + |(x - c_i)/a_i|^(2 b_i)),

where {a_i, b_i, c_i} is the premise parameter set and x is the input.

Layer-2: each node is a fixed node that computes the firing strength w_i. The output of each node is the product of all incoming signals:

O_{2,i} = w_i = μ_{A_i}(x) · μ_{B_i}(y), i = 1, 2.

In general, the node function can be any T-norm operator that performs the fuzzy AND.

Layer-3: every node is a fixed node. The i-th node calculates the ratio of the i-th rule's firing strength to the sum of all firing strengths, so the output of the i-th node is the normalized firing strength

O_{3,i} = w̄_i = w_i / (w_1 + w_2), i = 1, 2.

For convenience, the outputs of this layer are referred to as normalized firing strengths.

Layer-4: every node is an adaptive node with the node function

O_{4,i} = w̄_i f_i = w̄_i (p_i x + q_i y + r_i),

where w̄_i is the normalized firing strength from the third layer and {p_i, q_i, r_i} is the consequent parameter set.

The parameters of this layer are referred to as consequent parameters.

Layer-5: the single fixed node computes the overall output as the summation of all incoming signals, i.e.,

O_5 = f = Σ_i w̄_i f_i = (Σ_i w_i f_i) / (Σ_i w_i).

As a result, we have constructed an adaptive network that is functionally equivalent to a Sugeno fuzzy model.

In the ANFIS architecture, when the premise parameter values are fixed, the overall output can be expressed as a linear combination of the consequent parameters:

f = w̄_1 f_1 + w̄_2 f_2 = (w̄_1 x) p_1 + (w̄_1 y) q_1 + w̄_1 r_1 + (w̄_2 x) p_2 + (w̄_2 y) q_2 + w̄_2 r_2,

which is linear in the consequent parameters p_i, q_i, and r_i.

In the forward pass of the hybrid learning process, the consequent parameters are identified by least squares; in the backward pass, the error signals (derivatives of the squared error with respect to each node output) propagate backward from the output layer to the input layer, and the premise parameters are updated by the gradient descent approach [31].
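To make the five layers above concrete, here is a minimal forward-pass sketch of the two-input, two-rule first-order Sugeno ANFIS described in this section. The parameter values are hypothetical placeholders; in the actual system they would be fitted by the hybrid least-squares/gradient-descent procedure.

```python
import numpy as np

def gbell(x, a, b, c):
    """Generalized bell membership function used in layer 1."""
    return 1.0 / (1.0 + np.abs((x - c) / a) ** (2 * b))

def anfis_forward(x, y, premise, consequent):
    """Forward pass of a two-input, two-rule first-order Sugeno ANFIS.

    premise:    dict with bell parameters (a, b, c) for A1, A2, B1, B2
    consequent: list of (p_i, q_i, r_i) for the two rules
    """
    # Layer 1: membership degrees
    muA = [gbell(x, *premise["A1"]), gbell(x, *premise["A2"])]
    muB = [gbell(y, *premise["B1"]), gbell(y, *premise["B2"])]
    # Layer 2: firing strengths (product T-norm)
    w = np.array([muA[0] * muB[0], muA[1] * muB[1]])
    # Layer 3: normalized firing strengths
    w_bar = w / w.sum()
    # Layer 4: rule outputs weighted by normalized strengths
    f = np.array([p * x + q * y + r for (p, q, r) in consequent])
    # Layer 5: overall output
    return float(np.dot(w_bar, f))

# Example with arbitrary (hypothetical) parameter values
premise = {"A1": (1.0, 2.0, 0.0), "A2": (1.0, 2.0, 2.0),
           "B1": (1.0, 2.0, 0.0), "B2": (1.0, 2.0, 2.0)}
consequent = [(0.5, 0.2, 0.1), (-0.3, 0.8, 0.4)]
print(anfis_forward(1.0, 1.5, premise, consequent))
```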

3.4. ANFIS with Particle Swarm Optimization Algorithm

Kennedy and Eberhart invented PSO in 1995 as an intelligent, stochastic, evolutionary optimization approach that replicates the social behavior of flocking birds. Every solution in PSO is a "bird" in the search space, referred to as a "particle" [32]. PSO has been applied to optimization, multiobjective programming, combinatorial optimization, clustering, min-max problems, classification, prediction, and many other engineering applications [33]. It has been used effectively to solve many engineering problems [34–36] owing to its fast convergence rate in comparison with other algorithms. ANFIS-PSO performs a global minimization that searches for an optimal point in an n-dimensional space. The concept on which PSO is based is the random generation of a population together with the modeling and simulation of the mass movement of bird flocks and fish schools. Particles in the same communication group accelerate towards particles with higher fitness over time. Both methods perform well on difficult problems, and continuous optimization problems in particular are solved with tremendous success. ANFIS-PSO is well suited to solving prediction and forecasting problems.

The algorithm proceeds iteratively, updating the velocities and positions as follows:

v_{id}^{t+1} = w · v_{id}^{t} + c_1 r_1 (p_{id} - x_{id}^{t}) + c_2 r_2 (p_{gd} - x_{id}^{t})  (11)

x_{id}^{t+1} = x_{id}^{t} + v_{id}^{t+1}  (12)

where d = 1, 2, …, D is the dimension index, i = 1, 2, …, N indexes the particles in the population, w is the inertia weight, c_1 and c_2 are two positive constants, r_1 and r_2 are random values in the range [0, 1], x_{id}^{t} and v_{id}^{t} are the position and velocity of particle i in dimension d at iteration t, p_{id} is the particle's best position found so far, and p_{gd} is the best position found by the swarm.

The new velocity of the i-th particle is calculated using Equation (11), which takes into account three terms: (i) the prior velocity of the particle, (ii) the difference between the particle's previous best position and its present position, and (iii) the distance between the swarm's best position and the particle's present position.

Equation (12) calculates the new position of the particle.
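The velocity and position updates of Equations (11) and (12) can be sketched in a few lines of Python. This is a generic illustration rather than the exact implementation used in the study; the function name, bounds, and coefficient values are assumptions.

```python
import numpy as np

def pso_minimize(objective, dim, n_particles=30, iters=100,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-1.0, 1.0), seed=0):
    """Minimal PSO loop implementing Equations (11) and (12)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))              # positions
    v = np.zeros((n_particles, dim))                         # velocities
    pbest = x.copy()                                         # personal best positions
    pbest_val = np.array([objective(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()                     # global best position
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)    # Eq. (11)
        x = np.clip(x + v, lo, hi)                                # Eq. (12)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Example: minimizing a simple sphere function in 5 dimensions
best, best_val = pso_minimize(lambda p: float(np.sum(p ** 2)), dim=5)
```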

3.5. ANFIS with Genetic Algorithm

To derive the initial fuzzy model, an evolutionary algorithm is paired with a fuzzy logic system, similar to the ANFIS model. Once the initial fuzzy model is established, a genetic algorithm is used to update the consequent parameters of the generated fuzzy rules, producing the ANFIS-GA system [37]. The evolutionary algorithm fine-tunes the fuzzy model by updating the consequent parameters produced by the initial fuzzy model, searching the full solution space for the best-fitting consequent parameters. The lateral load output findings are more dependable with the ANFIS-GA approach, and for the first output there was no difference between the results of the testing and training phases. Figure 4 shows a flow chart of the GA-based fuzzy model for prediction on seismic data. Figure 5 explains how the ANFIS-GA model processes the training and testing datasets and where the ANFIS-GA model results are located in the network. A minimal sketch of such a GA loop is given below.
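The sketch below illustrates, under simple assumptions (a real-valued encoding of the consequent parameters, tournament selection, blend crossover, and Gaussian mutation), how a GA can minimize a fitness function such as the RMSE of the fuzzy model. It is not the exact operator set or encoding used in this study.

```python
import numpy as np

def ga_minimize(fitness, dim, pop_size=40, generations=100,
                crossover_rate=0.8, mutation_rate=0.1, sigma=0.1, seed=0):
    """Minimal real-coded GA: tournament selection, blend crossover, Gaussian mutation."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-1.0, 1.0, (pop_size, dim))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        new_pop = [pop[scores.argmin()].copy()]          # elitism: keep the best individual
        while len(new_pop) < pop_size:
            i, j = rng.integers(pop_size, size=2)        # tournament selection, parent a
            a = pop[i] if scores[i] < scores[j] else pop[j]
            i, j = rng.integers(pop_size, size=2)        # tournament selection, parent b
            b = pop[i] if scores[i] < scores[j] else pop[j]
            child = a.copy()
            if rng.random() < crossover_rate:            # blend crossover
                alpha = rng.random(dim)
                child = alpha * a + (1 - alpha) * b
            mask = rng.random(dim) < mutation_rate       # Gaussian mutation
            child[mask] += rng.normal(0.0, sigma, mask.sum())
            new_pop.append(child)
        pop = np.array(new_pop)
    scores = np.array([fitness(ind) for ind in pop])
    return pop[scores.argmin()], scores.min()
```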

3.6. Extreme Learning Machine Structure (ELM)

ELM is an ANN learning approach. ELMs supply machine learning for single-hidden-layer feedforward NN structures, employing the random projection concept and the early perceptron model in their problem-solving approach. An ELM, or single-hidden-layer feedforward network, is a feedforward ANN with a single layer of hidden neurons. To improve performance and avoid a time-consuming iterative training process, ELM [38, 39] uses one or more hidden layers whose parameters are not tuned. ELM is a feedforward approach for clustering, sparse approximation, regression, compression, classification, and feature learning that does not tweak the hidden node parameters; the randomly assigned hidden nodes are used as-is to interpret the data, and only the output mapping is learned. Figure 6 shows the structure of the ELM model, which is partitioned into input, hidden, and output layers. The ELM learning approach was developed to alleviate the drawbacks of feedforward ANNs, particularly in terms of learning speed. A minimal training sketch is shown below.
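The following minimal ELM training sketch follows the description above: the input weights and hidden-layer biases are drawn at random and kept fixed, and only the output weights are computed by a least-squares (pseudoinverse) solution. Function and variable names are illustrative, not taken from the study's code.

```python
import numpy as np

def elm_train(X, y, n_hidden=50, seed=0):
    """Train a single-hidden-layer ELM: random input weights, least-squares output weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (kept fixed)
    b = rng.normal(size=n_hidden)                 # random hidden-layer biases (kept fixed)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))        # sigmoid hidden-layer output matrix
    beta = np.linalg.pinv(H) @ y                  # output weights via Moore-Penrose pseudoinverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```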

4. Development of Computation Models

4.1. ANFIS-PSO-GA-ELM Model

The earthquake's peak ground acceleration was estimated in terms of magnitude, focal depth, hypocentral location, and average S-wave velocity. The evolutionary ANFIS, PSO, and GA techniques were developed using a filtered database with 18964 records. Of the 18964 data points, 15171 (80%) served as the training dataset and the remaining 3793 served as the testing dataset used to assess model performance (a split sketch is shown below). Once the datasets were arranged, ANFIS-PSO, ANFIS-GA, and ELM were used to estimate the peak ground acceleration parameter. Tables 1 and 2 show the parameters of the PSO and GA algorithms, respectively, where the number of epochs is the stopping criterion. The ANFIS-PSO and ANFIS-GA parameters listed in the tables were selected by a trial-and-error procedure. In addition, Figure 7 depicts the evolution of the RMSE values of the hybrid approaches over the number of epochs in PGA estimation.
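As a simple illustration of the data partitioning described above, the sketch below shuffles the records (assumed to be a NumPy array) and splits them 80%/20%; with 18964 records this reproduces the 15171/3793 split. The function name and seeding are assumptions.

```python
import numpy as np

def split_dataset(data, train_fraction=0.8, seed=0):
    """Shuffle and split records into training and testing sets (here 80%/20%)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(data))
    n_train = int(train_fraction * len(data))     # 18964 records -> 15171 train, 3793 test
    return data[idx[:n_train]], data[idx[n_train:]]
```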

This work employs the ANFIS-PSO-GA method to optimize the hidden-layer parameters and input-layer weights, as shown in Figure 8. ELM generates the hidden-layer offsets randomly; to enhance ELM's stability and accuracy, the optimal weights and offsets found by the optimizer are used as the hidden-layer biases and input weights. Because the ELM network structure is unstable, this research improves the ELM algorithm by using the ANFIS-PSO-GA algorithm to optimize the randomly generated sets of input weights and hidden-layer biases; the optimal weights and biases are then used in the ELM, improving the network's stability and the algorithm's accuracy. A set of input weights and hidden-layer biases is created at random, and each candidate set is encoded as the position vector of a particle in the particle swarm. Through an iterative technique, each particle moves closer to its own best position and to the best particle in the group, and the optimal weights and biases are thereby found. The position and speed of the particles are updated at each iteration:

v_{pd}^{t+1} = w · v_{pd}^{t} + c_1 r_1 (g_{pd} - x_{pd}^{t}) + c_2 r_2 (g_{ps,d} - x_{pd}^{t}),

x_{pd}^{t+1} = x_{pd}^{t} + v_{pd}^{t+1},

w = w_max - (w_max - w_min) · t / t_max,

where v_p is the flying speed of particle p, restricted to a set velocity range; c_1 and c_2 are learning factors; r_1 and r_2 are random numbers in the interval [0, 1]; the particle position is restricted to a set position range; g_p is the best position found by particle p; g_ps is the best position found by the whole particle swarm; w is the inertia weight, with maximum and minimum weights w_max and w_min set to 0.7 and 0.3; t is the current epoch number; and t_max is the maximum number of epochs. A simplified sketch of this optimization of the ELM weights and biases is given below.
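The following sketch illustrates the core idea of letting a swarm-based optimizer tune the ELM's input weights and hidden-layer biases: each candidate parameter vector is unpacked, the ELM output weights are fitted by least squares, and the validation RMSE is returned as the objective. All names are hypothetical, and the full ANFIS-PSO-GA pipeline in this work involves additional components.

```python
import numpy as np

def elm_rmse_given(params, X_train, y_train, X_val, y_val, n_hidden):
    """Objective for the optimizer: unpack candidate input weights and biases,
    fit the ELM output weights by least squares, and return validation RMSE."""
    d = X_train.shape[1]
    W = params[: d * n_hidden].reshape(d, n_hidden)   # candidate input weights
    b = params[d * n_hidden:]                          # candidate hidden-layer biases
    H = 1.0 / (1.0 + np.exp(-(X_train @ W + b)))       # sigmoid hidden-layer outputs
    beta = np.linalg.pinv(H) @ y_train                 # least-squares output weights
    H_val = 1.0 / (1.0 + np.exp(-(X_val @ W + b)))
    pred = H_val @ beta
    return float(np.sqrt(np.mean((pred - y_val) ** 2)))

# A swarm-based optimizer (such as the pso_minimize sketch in Section 3.4) can then
# search the (d*n_hidden + n_hidden)-dimensional space of input weights and biases:
# best_params, best_rmse = pso_minimize(
#     lambda p: elm_rmse_given(p, X_tr, y_tr, X_te, y_te, n_hidden=20),
#     dim=X_tr.shape[1] * 20 + 20)
```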

4.2. Application of Model

The data used in this paper come from records of the Himalayan range of Himachal Pradesh. A sample of the data is shown in Table 3.

Assuming given input and output data, the experimental steps for earthquake prediction based on the ANFIS-PSO-GA-ELM approach are as follows:
(i) Step-1: determine the training samples, test data, and dataset for prediction. The maintenance technique is used to train the ANFIS-PSO-GA-ELM network in this study.
(ii) Step-2: normalize the sample set to increase the convergence rate of the APSO-ELM algorithm (a min-max normalization sketch is shown after this list).
(iii) Step-3: using Python programming libraries, create a three-layer ANFIS-PSO-GA-ELM network structure; determine the excitation function and the number of neurons in each layer, choosing the sigmoid as the excitation function.
(iv) Step-4: use ANFIS-PSO-GA to optimize the ELM parameters and obtain the optimal network parameter values, such as the optimal weights connecting the input layer to the hidden layer; the output layer is connected to the hidden layer with its own weights and bias.
(v) Step-5: compare the training effect of ANFIS-PSO-GA-ELM with other approaches; the data are also trained using the classic backpropagation neural network and the ELM model, and then the training accuracy of ANFIS-PSO-GA-ELM, the ELM model, and the backpropagation NN is compared.
(vi) Step-6: for the earthquake prediction models, ANFIS-PSO-GA-ELM and ELM are applied to the input data and compared with the backpropagation NN in terms of accuracy.
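Step-2 calls for normalizing the sample set but does not specify the scheme; a common choice is min-max scaling, sketched below as an assumption rather than the study's exact preprocessing.

```python
import numpy as np

def min_max_normalize(X):
    """Min-max normalization of the sample set (Step-2), scaling each feature to [0, 1]."""
    x_min, x_max = X.min(axis=0), X.max(axis=0)
    return (X - x_min) / (x_max - x_min + 1e-12), (x_min, x_max)

def denormalize(Xn, x_min, x_max):
    """Map normalized values back to the original scale."""
    return Xn * (x_max - x_min + 1e-12) + x_min
```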

5. Discussion and Result

In the training and testing stages, the error indicators, namely the correlation coefficient (R), the root mean square error (RMSE), and the mean absolute error (MAE), can be defined as follows [39]:

R = Σ_{i=1}^{n} (y_i - ȳ)(ŷ_i - ŷ̄) / sqrt( Σ_{i=1}^{n} (y_i - ȳ)² · Σ_{i=1}^{n} (ŷ_i - ŷ̄)² ),

RMSE = sqrt( (1/n) Σ_{i=1}^{n} (y_i - ŷ_i)² ),

MAE = (1/n) Σ_{i=1}^{n} |y_i - ŷ_i|,

where y_i is the measured value, ŷ_i is the predicted value, n is the number of data points, ȳ is the mean of the observations, and ŷ̄ is the mean of the predictions [40–43]. Table 4 shows the results of ANFIS-PSO-GA-ELM, ANFIS-GA, and ANFIS-PSO during the training and testing stages. Compared with the other generated models, the ANFIS-PSO-GA-ELM model gave the highest accuracy in terms of R, RMSE, and MAE (see Table 4) during the training stage. Compared with the other created models, the ANFIS-PSO network predicts the peak ground acceleration parameter with the best accuracy in terms of R, RMSE, and MAE (see Table 4) during the testing stage. For greater illustration, the observed and predicted values of PGA are shown in scatter plots for both the training and testing phases of the constructed models in Figures 9(a) and 9(b). The ANFIS-PSO-GA-ELM model shows less scatter between observed and predicted PGA values than the other developed models. It is worth noting that the developed ANFIS-PSO, ANFIS-GA, and ANFIS-ELM models are all highly accurate; however, the ANFIS-PSO-GA-ELM triple hybrid model performs better than the double hybrid models.
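The three error indicators defined above can be computed directly; the short sketch below (illustrative NumPy code, not the study's implementation) mirrors those definitions.

```python
import numpy as np

def error_metrics(y_obs, y_pred):
    """Correlation coefficient R, RMSE, and MAE as defined above."""
    y_obs = np.asarray(y_obs, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    r = np.corrcoef(y_obs, y_pred)[0, 1]              # correlation coefficient R
    rmse = np.sqrt(np.mean((y_obs - y_pred) ** 2))    # root mean square error
    mae = np.mean(np.abs(y_obs - y_pred))             # mean absolute error
    return r, rmse, mae
```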

The generated models' performance is also compared with that of certain well-known soft computing-based models, namely the ANFIS-PSO, ANFIS-GA, ANFIS-ELM, ANN-SA, GP, and GP-SA algorithms. Table 5 presents the statistical error parameters for these approaches as well as for the constructed models.

For a model to have strong predictive ability, its error should be independent of the input factors. Accordingly, Figure 10 shows the ratios of the predicted PGA parameter to the observed values for the various created models against magnitude, focal depth, and average S-wave velocity. The accuracy of a model deteriorates as the scattering in this figure grows. The ANFIS-PSO-GA-ELM model's predictions are accurate with respect to the input parameters, as can be seen in these graphs.

Finally, Figures 11(a) and 11(b) show results calculated by applying the Brune model to the data optimized by the ANFIS-PSO-GA-ELM model. These results also help to estimate the earthquake source parameters.

6. Conclusion

It is quite difficult to make accurate predictions about the most important components of seismic loading. In this study, a soft computing method was employed to overcome the prediction problem by deleting a few unneeded input parameters; specifically, the parameters included in the study were earthquake magnitude, focal depth, hypocentral location, and average S-wave velocity. The ANFIS approach was used to select the relevant features for predicting the components that are most important in seismic loading. All of the approaches utilized produced outstanding and realistic predictions of seismic loads; however, the ANFIS-PSO-GA-ELM method generates the most accurate and reliable forecasts of seismic loads among all of the approaches used in this work. In addition, the ELM networks showed the best results in predicting the PGA parameter. Consequently, both the ANFIS-GA and ANFIS-PSO methods were judged to be effective; however, the experimental findings in this earthquake prediction task show that the extreme learning machine (ELM) method combined with the ANFIS-PSO-GA method yields superior results. It is therefore recommended that the ELM method be used in neural networks in order to achieve the best possible outcomes. The predictions made by the ANFIS-PSO-GA-ELM model are accurate, with a trend that is less noticeable in relation to the input parameters. Future research should concentrate on the performance of algorithms that make use of hybrid metaheuristic techniques, with particular attention paid to triple hybrid approaches.

Data Availability

The numerical data collected from various stations were used for the findings of this study.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research is supported by the Princess Nourah Bint Abdulrahman University Researcher Supporting Project number PNURSP2023R195, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia.