Abstract
The discount rate input parameter used in Net Present Value (NPV) computations for mineral project evaluation is a function of a risk-free rate and a risk premium component. To obtain a reliable NPV, it is important to estimate each of these components accurately. This study employs a hybrid approach to predict the risk-free rate using the Discrete Wavelet Transform and Artificial Neural Networks (DWT-ANN). The DWT-ANN models were tested using the London Interbank Offered Rate (LIBOR) dataset from 1986 to 2020. The results showed that, of the three hybrid algorithms developed and applied, the Discrete Wavelet Transform-Radial Basis Function Neural Network (DWT-RBFNN) performed best in predicting the risk-free rate, achieving the lowest root mean square error of 0.0376 and the highest correlation coefficient of 0.9995. The DWT-RBFNN model can therefore be a useful alternative tool for predicting the risk-free rate, which is a key input parameter for the determination of the discount rate.
1. Introduction
The risk-free component of the discount rate in mineral project evaluation is key to the ultimate determination of the cost of borrowing for funding mining ventures. The discount rate is an important requirement for computing the net present value (NPV) of mineral projects. Globally, the process of evaluating mineral projects involves the use of US dollars in all cost and revenue estimations because of the currency's relative stability. As such, the London Interbank Offered Rate (LIBOR) is mostly used as the base rate (risk-free rate) to which a spread (or risk premium) is added to determine the total cost of borrowing from banks to fund mineral projects. LIBOR, the primary benchmark for short-term interest rates around the world, is also considered a key measure of liquidity risk in the global monetary market [1]. Reference [2] asserts that LIBOR is considered the prime rate and is used as the benchmark for pricing almost all interbank products, including floating rate agreements, interest rate swaps, currency options, and futures. Although loans contracted from banks for funding mineral projects are usually dollarised and relatively stable, a careful study of the LIBOR data over the past 34 years (1986–2020) indicates significant non-stationary characteristics and nonlinearities. Therefore, it would be inaccurate to use average LIBOR values in estimating the discount rate for mineral project evaluation. That is, it would be highly misleading to use a simple mean (average) to determine the central tendency of nonlinear, asymmetrical data such as LIBOR, because the mean is susceptible to the influence of outliers in nonlinear data [3]. Consequently, investigating the lending rates on US dollar loans to mining firms makes it imperative to predict LIBOR values accurately.
In the literature, the Benford empirical method has commonly been considered the standard classical approach for tracking LIBOR. However, the main strength of the Benford approach lies in detecting errors in LIBOR data as submitted by different lending institutions. In fact, Benford's Law is not a forecasting tool and cannot adequately handle the non-stationary behaviour exhibited by the LIBOR time series data [4]. In addition, the literature shows that LIBOR data do not always obey Benford's Law, and hence the approach can give significant deviations when used to track LIBOR values [4]. This assertion is confirmed in [4], where the Benford second-digit reference distribution method was applied to track the daily LIBOR over the period from 2005 to 2008. The findings revealed that, over an extended period of time, there were significant departures between the Benford law and the empirical second-digit distributions. The study therefore called for alternative, objective predictive ways of tracking LIBOR data.
With the rapid development of artificial intelligence (AI) methods, artificial neural networks (ANNs) provide a new means of modelling and solving high-dimensional, nonlinear, and non-stationary problems in financial data analysis [5]. Previous studies show that AI methods have been applied to predict the behaviour of LIBOR and interest rates in the money market [6, 7]. The motive behind these studies was to overcome the weakness of the classical Benford empirical method of estimating LIBOR, which is the basis for estimating interest rates. The AI methods applied in the literature to predict LIBOR and other related base rates, which gave excellent performance with minimal errors, are reviewed and summarised in Table 1. It is evident from the literature pertaining to this study that AI techniques provide better solutions for handling the complexity of LIBOR data.
A review of the related studies in Table 1 shows that the standalone AI techniques [7, 9] gave excellent predictions, as observed in their mean square error (MSE) and mean absolute error (MAE) values as well as their convergence rates. However, the hybrid techniques [2, 6, 8] improved on these predictions. Hybrid AI methods are known to perform better than single-model algorithms by combining and complementing each other's strengths and weaknesses. That is, hybrid methods integrate the best features of neural networks for better training and generalisation.
It has also been found in the recent literature that wavelet-coupled ANN-based models produce more compelling results than standalone models [11]. That is, wavelet hybrid models are known to perform better with ANN techniques because the wavelet function works as a preprocessor that increases the accuracy of the ANN prediction [12]. The Discrete Wavelet Transform (DWT) serves as a decomposer that translates the main time series data into different frequency components, which are subsequently used as inputs to data-driven models. In this way, DWT-ANN hybrid models use multi-scale inputs that capture valuable information contained in the main time series datasets [11], leading to enhanced prediction performance. Accordingly, the wavelet transform has been coupled with AI methods in several engineering fields, producing acceptable results [13–19]. Therefore, the DWT was adopted and integrated with ANNs for effective input selection and enhancement of ANN prediction accuracy [20, 21].
The essence of such a hybrid technique is that the DWT can reveal aspects of data patterns, breakdown points, and discontinuities that other signal analysis techniques might fail to capture [22]. The DWT also has localisation capability in both the time and frequency domains [23] and can denoise time series data, reducing uncertainties in the LIBOR data in the market. Besides, the DWT is simple to use and requires little computing power [24, 25]. To the best of the authors' knowledge, despite the enumerated strengths of the DWT, no existing hybrid method has utilised the benefits of coupling the DWT with any ANN method to predict LIBOR.
This study, for the first time, combines the benefits of DWT analysis with different ANN methods to improve the accuracy of LIBOR predictions. The proposed hybrid models developed and implemented are the Discrete Wavelet Transform-Backpropagation Neural Network (DWT-BPNN), the Discrete Wavelet Transform-Radial Basis Function Neural Network (DWT-RBFNN), and the Discrete Wavelet Transform-Generalised Regression Neural Network (DWT-GRNN). To this end, the main contributions of this study are to: (i) develop an AI model that employs the strengths of the DWT to preprocess and adequately prepare nonlinear time series data for the prediction of LIBOR; and (ii) explore a hybrid DWT-ANN model for improving the accuracy of LIBOR prediction for the determination of the discount rate in mineral project evaluation. In this way, LIBOR, which serves as the base rate (risk-free rate) to which a risk premium is added to obtain the discount rate, can be accurately determined. This is vital because mining ventures come with enormous uncertainties and thus must be economically evaluated using a realistic discount rate. In this study, the discrete wavelet transform algorithm was applied to decompose the LIBOR time series dataset into various frequency components. Subsequently, the decomposed components were used as inputs to the BPNN, RBFNN, and GRNN models. The predicted frequency components were then reconstructed to obtain the final value. Finally, the predictions of the developed DWT-BPNN, DWT-RBFNN, and DWT-GRNN hybrid models were quantitatively compared using statistical tools.
2. Data Description
To build the hybrid (DWT-BPNN, DWT-RBFNN, and DWT-GRNN) models for predicting the risk-free rate, a total of 416 LIBOR data points was obtained from Macrotrends LLC via https://www.macrotrends.net/1433/historical-libor-rates-chart. The data span from 1st January, 1986 to 1st September, 2020. The dataset shows a nonlinear, discontinuous character, as indicated graphically in Figure 1. Summary descriptive statistics of the dataset are presented in Table 2.

3. Methods Applied
3.1. Discrete Wavelet Transform
DWT analysis is a time-dependent spectral process that decomposes a time series in the time-frequency space. The technique provides a time-scale architecture and breaks the data series down into wavelet versions of the original (mother) wavelet [22]. DWT allows the use of longer intervals for low-frequency information and shorter intervals for high-frequency information. The decomposition of the original signal X(t) into i decomposition levels is shown in equation (1) [26]:

X(t) = ai(t) + d1(t) + d2(t) + ... + di(t) (1)

where a and d depict the low- and high-frequency components, respectively.
The a and d frequency components of X(t) are reconstructed to the same length as the original data to avoid prediction errors [27]. The reconstruction of the decomposed data, for example for a Daubechies level 4 decomposition, is carried out by summing the frequency components as shown in equation (2):

X(t) = a4(t) + d1(t) + d2(t) + d3(t) + d4(t) (2)
These decomposition levels (a4, d1, d2, d3, and d4) are generally built from a set of basis functions ψc,b(t) that can be generated by dilating and translating a mother wavelet ψ(t), as shown in equation (3) [28]:

ψc,b(t) = (1/√c) ψ((t − b)/c) (3)

where c and b are the dilation and translation factors, respectively.
In DWT, the dyadic values c = 2^j and b = k·2^j are used, where j is the decomposition level and k is a location (translation) index running from 1 to 2^(−j)·n, with n being the number of data points. Equation (3) can then be rewritten as equation (4):

ψj,k(t) = 2^(−j/2) ψ(2^(−j)t − k) (4)
Similarly, for a basic wavelet function φ(t) translated by τ units and dilated by a scale factor α, an inner product with the time series X(t) can be carried out at different scales; the corresponding integral transformation is given in equation (5):

W(α, τ) = (1/√α) ∫ X(t) φ*((t − τ)/α) dt (5)

where φ* denotes the complex conjugate of φ.
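As a brief illustration of equations (1) and (2), the minimal sketch below performs a level-4 discrete wavelet decomposition of a signal and reconstructs it from the resulting bands. It is not the authors' MATLAB implementation; it assumes the PyWavelets package and uses a synthetic series in place of the LIBOR data.

```python
# Minimal sketch (assumes PyWavelets): level-4 DWT decomposition and
# reconstruction, illustrating that the bands sum back to the signal.
import numpy as np
import pywt

x = np.sin(np.linspace(0, 8 * np.pi, 416)) + 0.1 * np.random.randn(416)  # placeholder series

# wavedec returns [a4, d4, d3, d2, d1]: one approximation and four detail bands
coeffs = pywt.wavedec(x, wavelet="db4", level=4)

# waverec combines all bands back into the original signal
x_rec = pywt.waverec(coeffs, wavelet="db4")[: len(x)]
print("max reconstruction error:", np.max(np.abs(x - x_rec)))  # ~1e-12
```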
3.2. Backpropagation Neural Network
BPNN is one of the most widely used ANN models for prediction and classification due to its strong nonlinear processing ability [29]. The BPNN architecture uses a multilevel hierarchical feedback structure to adjust the network weights through the backpropagation algorithm. The BPNN network consists of input, hidden, and output layers (Figure 2).

Input datasets (Xn) are transmitted through the network via the input layer. The input data are first assigned weights (Wnj) together with a constant bias term, bi. The weighted input data are then transmitted to the hidden layer, where the input to each neuron is transformed by an activation function. The number of hidden layers and their respective neurons can be determined efficiently through trial and error [30]. The commonly used logistic sigmoid activation function, f(Im) [31], was applied in this study, as presented in equation (6):

f(Im) = 1 / (1 + e^(−Im)) (6)

where Im is the neuron input.
The signal from the hidden layer is then fed as input to the output layer, where it is transformed by a linear activation function to obtain the final output, Yi, as presented in equation (7):

Yi = Σm Wmi f(Im) + b (7)

where Wmi are the weights connecting hidden neuron m to output i and b is the output bias.
If the output value (Yi) is not as desired, an error signal is propagated back along the original connection path. The connection weights are then adjusted using a backpropagation optimisation algorithm until an output with the desired minimal error deviation is obtained.
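The sketch below illustrates the kind of one-hidden-layer backpropagation network described above. It uses scikit-learn's MLPRegressor as a stand-in, with a sigmoid hidden layer and the identity output that the library applies for regression; the data, layer size, and solver (L-BFGS rather than the Levenberg–Marquardt algorithm used later in the study) are illustrative assumptions, not the authors' settings.

```python
# Illustrative one-hidden-layer BPNN-style regressor (assumes scikit-learn).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(300, 3))          # hypothetical inputs
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2       # hypothetical target

bpnn = MLPRegressor(hidden_layer_sizes=(10,),  # one hidden layer
                    activation="logistic",     # sigmoid hidden neurons
                    solver="lbfgs",            # stand-in for Levenberg-Marquardt
                    max_iter=5000,
                    random_state=0)
bpnn.fit(X, y)                                 # weights adjusted from backpropagated error
print("training R^2:", bpnn.score(X, y))
```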
3.3. Radial Basis Function Neural Network
RBFNN has been successfully applied in areas such as pattern recognition, system identification, nonlinear function approximation, adaptive control, speech recognition, and time-series prediction [32]. It combines learning vector quantisation and gradient descent [33, 34] and is made up of a three-layer feedforward structure. The first layer consists of the inputs of the network; the second is a hidden layer whose activation function is a radial basis function (RBF); and the third, linear layer corresponds to the final output of the network (Figure 3). The input layer receives the input data (xi) from the external environment and transmits them to the hidden layer without any weight connections. Each neuron of the hidden layer performs a nonlinear transformation using an RBF. It is noteworthy that RBF values depend only on the distance of the input from the neuron's centre, cj [7, 35]. The number and location of the centres in the hidden layer directly influence the performance of the RBFNN. The input signal to the output layer is the weighted sum of the outputs from the neurons of the hidden layer. This sum is converted by a linear function in the output layer to produce the final output, f(x). The output of the RBFNN can therefore be defined as a linear combination of radial basis functions of the inputs and neuron parameters [30]. Mathematically, for the input variables (x1, x2, x3, …, xn), the RBF approximation is given by equation (8) [36]:

f(x) = b0 + Σj wj Φj(‖x − cj‖) (8)

where b0 is the bias term, n is the number of hidden neurons, wj are the output-layer weights, and Φ denotes the type of RBF used. This study used the Gaussian activation function given by equation (9):

Φj(x) = exp(−‖xi − cj‖² / (2σj²)) (9)

where σj denotes the width parameter, cj is the centre, and ‖xi − cj‖ is the Euclidean distance between xi and cj. Additionally, σj is directly proportional to the maximum Euclidean distance, dmax, between the RBF centres; that is, for a nonnegative scalar k, σj = k·dmax.
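The following sketch shows a generic Gaussian RBF network of the form in equations (8) and (9): k-means supplies the centres, a common width is tied to the maximum inter-centre distance, and the output weights are fitted by least squares. The centre-selection method, width rule, and data are illustrative assumptions, not the authors' exact configuration.

```python
# Generic Gaussian RBF network sketch (assumes NumPy and scikit-learn).
import numpy as np
from sklearn.cluster import KMeans

def fit_rbfnn(X, y, n_centres=15, k=0.3):
    centres = KMeans(n_clusters=n_centres, n_init=10, random_state=0).fit(X).cluster_centers_
    d_max = max(np.linalg.norm(ci - cj) for ci in centres for cj in centres)
    sigma = k * d_max                                      # width proportional to d_max (illustrative k)
    dists = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
    Phi = np.hstack([np.ones((len(X), 1)), np.exp(-dists**2 / (2 * sigma**2))])  # bias + Gaussian features
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)            # output-layer weights
    return centres, sigma, w

def predict_rbfnn(X, centres, sigma, w):
    dists = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
    Phi = np.hstack([np.ones((len(X), 1)), np.exp(-dists**2 / (2 * sigma**2))])
    return Phi @ w

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (300, 3))
y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2]
params = fit_rbfnn(X, y)
print("training RMSE:", np.sqrt(np.mean((predict_rbfnn(X, *params) - y) ** 2)))
```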

3.4. Generalised Regression Neural Network
A Generalised Regression Neural Network (GRNN) is a four-layer, multidimensional feed-forward system based on nonlinear regression theory, and it is able to fit multidimensional surfaces through data [37]. A GRNN works by measuring how far a given sample pattern is from the patterns in the training set in N-dimensional space, typically using the Euclidean distance. The network consists of the input layer, the pattern layer, the summation layer, and the output layer (Figure 4).

The GRNN input layer transmits data received from the external environment to the pattern layer. The input layer is connected to the pattern layer through the pattern-layer weights (WP); each pattern unit represents a training input pattern, and its output is a measure of the distance of the input from the stored patterns. The pattern layer comprises radial basis neurons representing the training patterns, whose transfer function is a Gaussian with a spread factor. Each pattern-layer unit is connected to the summation layer through the summation-layer weights (WS) [38].
The summation layer has two types of processing units: summation units (S) and a single division unit (D). The number of summation units is always equal to the number of output units. Each summation unit sums the pattern-layer activations weighted by the corresponding training targets, while the division unit sums the pattern-layer activations without any weighting or further activation function. Each output unit is connected only to its corresponding summation unit and to the division unit. The output layer yields the predicted value as the quotient of the summation-unit output and the division-unit output, as presented in equation (10) [39]:

Y(x) = Σk yk K(x, xk) / Σk K(x, xk) (10)

where Y(x) is the predicted value for input x, yk is the activation weight (target value) associated with the k-th pattern-layer neuron, and K(x, xk) is the radial basis function kernel between the input x and the training sample xk, as expressed in equation (11):

K(x, xk) = exp(−dk² / (2σ²)) (11)

where dk² is the squared Euclidean distance between the training sample xk and the input x, and σ denotes the spread parameter.
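A minimal sketch of this kernel-weighted-average estimator is given below: the prediction is the Gaussian-kernel-weighted mean of the stored training targets. The spread value and data are placeholders; the study tunes the spread as described in Section 3.5.6.

```python
# GRNN-style estimator sketch (assumes NumPy): kernel-weighted average of targets.
import numpy as np

def grnn_predict(X_train, y_train, X_query, spread=0.2):
    # squared Euclidean distances between each query and each training pattern
    d2 = np.sum((X_query[:, None, :] - X_train[None, :, :]) ** 2, axis=2)
    K = np.exp(-d2 / (2 * spread ** 2))          # pattern-layer activations (kernel values)
    return (K @ y_train) / np.sum(K, axis=1)     # summation units / division unit

rng = np.random.default_rng(2)
X_train = rng.uniform(-1, 1, (200, 3))
y_train = np.sin(X_train[:, 0]) + X_train[:, 1]
X_test = rng.uniform(-1, 1, (5, 3))
print(grnn_predict(X_train, y_train, X_test))
```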
3.5. Proposed Hybrid Model Development
In this section, a flowchart of the proposed DWT-ANN hybrid model is presented in Figure 5. The DWT-ANN variant models explored and implemented in this study are the DWT-BPNN, DWT-RBFNN, and DWT-GRNN. The procedural frameworks employed to execute the modelling process are presented in the subsequent sections.

3.5.1. Data Decomposition Using Discrete Wavelet Transform
The objective of the decomposition is to stabilise the dataset, since the LIBOR time series exhibits significant variability and nonlinearity. In this way, the DWT filters and denoises the data and thus provides suitable inputs for the DWT-ANN model formulation. The technique also smoothens the signal, making it easier to detect any changes that may be present [23].
To prepare the dataset for the development of the proposed hybrid models (DWT-BPNN, DWT-GRNN, and DWT-RBFNN), the LIBOR time series data was first decomposed into a low-frequency component (a4) and high-frequency components (d1, d2, d3, and d4) (Figure 6). That is, the Daubechies level 4 (db4) wavelet was used to decompose the LIBOR time series data into the various frequency bands. The db4 wavelet was chosen because it has been applied most successfully in the literature for effective time series prediction compared with other wavelet choices.

It is noteworthy that the db4 is based on four filters, including decomposition low-pass, decomposition high-pass, reconstruction low-pass, and reconstruction high-pass filter [40]. The db4 algorithm was applied because the low-pass filter, associated with the scaling function, allows the analysis of low frequency components, while the high-pass filter, associated with the wavelet function, allows the analysis of high frequency components. Detailed information of the db4 algorithm can be found in [41].
In this study, the method involves first passing the original signal (X1) through the filters to obtain the first-level approximation (a1) and detail (d1), as shown in Figure 6. The approximation is then passed through the high- and low-pass filters to produce the second-level approximation (a2) and detail series (d2). This process is repeated to obtain the fourth and final level approximation (a4) and detail (d4). The approximation holds the general trend of the original signal, while a detail depicts the high-frequency components [42].
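The sketch below illustrates this decomposition step: a db4, level-4 DWT of a series, with each band reconstructed back to the full series length so that a4, d1, ..., d4 are available as aligned input series. It assumes PyWavelets and uses a placeholder series in place of the 416-point LIBOR data; the study itself performed this step in MATLAB.

```python
# Sketch of db4 level-4 decomposition into full-length band signals (assumes PyWavelets).
import numpy as np
import pywt

libor = np.sin(np.linspace(0, 8 * np.pi, 416)) + 0.1 * np.random.randn(416)  # placeholder

coeffs = pywt.wavedec(libor, "db4", level=4)     # [a4, d4, d3, d2, d1] coefficient arrays
names = ["a4", "d4", "d3", "d2", "d1"]

components = {}
for i, name in enumerate(names):
    # keep one coefficient band, zero the rest, then reconstruct to full length
    keep = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
    components[name] = pywt.waverec(keep, "db4")[: len(libor)]

# The bands sum back to the original series, as in equation (2)
recon = sum(components.values())
print("max |libor - (a4+d1+d2+d3+d4)|:", np.max(np.abs(libor - recon)))
```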
3.5.2. Data Partitioning
The 416 data points were split in a ratio of 80 : 20 for training and testing of the DWT-BPNN, DWT-RBFNN, and DWT-GRNN models, respectively. By this, 332 of the data points were used for training and the remaining 84 were used independently to test and validate the developed models. In splitting the LIBOR dataset, the widely used hold-out validation method was employed. The method stipulates that, to guarantee good predictions, the sample size of the training data must be larger than that of the testing data [43]. The dataset was thus divided in accordance with this rule to achieve the desired results.
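A minimal sketch of this 80 : 20 hold-out split is shown below, assuming (as is common for time series) that the split is chronological; the paper does not state the ordering explicitly.

```python
# Hold-out split sketch: 332 training points and 84 testing points out of 416.
import numpy as np

libor = np.arange(416, dtype=float)   # placeholder for the LIBOR series
n_train = 332
train, test = libor[:n_train], libor[n_train:]
print(len(train), len(test))          # 332 84
```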
3.5.3. Input-Output Parameter Selection
As noted in Section 3.5.1, the LIBOR time series data was decomposed into the low-frequency (a4) and high-frequency (d1, d2, d3, and d4) components using the Daubechies level 4 (db4) wavelet. For each low-frequency component a4i and each high-frequency component d1i, d2i, d3i, and d4i, where i = 1, 2, 3, …, N, the BPNN, RBFNN, and GRNN models were applied to predict the corresponding values. It is important to restate that 332 of the data points were used for training and the remaining 84 were used independently to test and validate the developed models. The input and output variables used for the variant DWT-ANN models are shown in Table 3.
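The exact input-output configuration is given in Table 3 (not reproduced here). For illustration only, the sketch below assumes one common choice for such models: the previous p values of each frequency component serve as inputs for predicting its next value. This is an assumption, not the paper's stated configuration.

```python
# Hypothetical lagged-input construction for one frequency component (assumes NumPy).
import numpy as np

def make_lagged(series, p=3):
    """Build (n-p, p) lagged inputs and (n-p,) one-step-ahead targets."""
    X = np.column_stack([series[i : len(series) - p + i] for i in range(p)])
    y = series[p:]
    return X, y

a4 = np.sin(np.linspace(0, 4 * np.pi, 416))      # placeholder a4 component
X_a4, y_a4 = make_lagged(a4, p=3)
print(X_a4.shape, y_a4.shape)                    # (413, 3) (413,)
```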
3.5.4. DWT-BPNN Hybrid Model for LIBOR Prediction
In this study, one hidden layer of the BPNN was utilised. This was appropriate because it has been proven that a BPNN with one hidden layer is sufficient as a universal function approximator of any complex system [44]. The hyperbolic tangent transfer function was utilised in the hidden layer, while the linear activation function was used in the output layer. The network was trained at a learning rate of 0.03 for 5000 epochs using the Levenberg–Marquardt backpropagation algorithm [45]. To develop the DWT-BPNN model, the steps involved in the training stage include the following:
(i) The decomposed LIBOR data (a4, d1, d2, d3, and d4) are individually fed into the network via the input layer, where each input is assigned a weight. The weighted inputs are transmitted to the hidden layer, where they are transformed by the hyperbolic tangent function and passed to the output layer. This process predicted each of the low- and high-frequency components.
(ii) The output is compared with the expected LIBOR values and the error is propagated backward through the network for weight adjustment.
(iii) The adjusted weights are fed back into the network and the process is repeated until the expected minimum error is obtained.
The resulting data was reconstructed by summing the individual components predicted. The BPNN optimal training and testing results for each frequency component were achieved based on R and RMSE criteria (Table 4).
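The sketch below illustrates this train-per-band and sum-to-reconstruct loop under the same illustrative assumptions as the earlier snippets: a PyWavelets db4 decomposition, hypothetical lagged inputs, and scikit-learn's MLPRegressor (L-BFGS) standing in for the MATLAB Levenberg–Marquardt training used in the study.

```python
# Sketch of the DWT-BPNN loop: one network per frequency band, band predictions summed.
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

def make_lagged(series, p=3):
    X = np.column_stack([series[i : len(series) - p + i] for i in range(p)])
    return X, series[p:]

libor = np.sin(np.linspace(0, 8 * np.pi, 416)) + 0.05 * np.random.randn(416)  # placeholder
coeffs = pywt.wavedec(libor, "db4", level=4)

prediction = np.zeros(416 - 3)
for i in range(len(coeffs)):                       # a4, d4, d3, d2, d1 bands
    keep = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
    band = pywt.waverec(keep, "db4")[:416]         # full-length band signal
    Xb, yb = make_lagged(band, p=3)
    net = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                       solver="lbfgs", max_iter=5000, random_state=0).fit(Xb, yb)
    prediction += net.predict(Xb)                  # summed band predictions (reconstruction)

print("in-sample RMSE:", np.sqrt(np.mean((prediction - libor[3:]) ** 2)))
```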
3.5.5. DWT-RBFNN Hybrid Model for LIBOR Prediction
The development of the DWT-RBFNN for predicting LIBOR involved determining the optimal number of hidden neurons, the Gaussian centres, and the widths of the RBFs, as well as the weights of the output layer [46]. The decomposed (a4, d1, d2, d3, and d4) LIBOR data were trained using the gradient descent learning algorithm, in which the weights are adapted in part according to the deviation between the predicted and target outputs. The variable parameters that affect the training of the RBFNN are the width parameter and the maximum number of neurons in the hidden layer. To choose the optimum RBFNN architecture, the width value and the maximum number of hidden neurons that produced the lowest RMSE and the largest R in both the training and testing datasets were selected. The RBFNN training can be divided into two phases:
(i) determining the parameters of the radial basis functions, i.e., the Gaussian centres and widths (σ); and
(ii) determining the output weights by a supervised learning method using Least Mean Squares (LMS).
In this study, the respective width parameter values and the number of hidden layer neurons that produce the highest R and least RMSE results for optimum model performance in training and testing are presented in Table 5.
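The sketch below illustrates this kind of architecture search: candidate widths and hidden-neuron counts are scanned and the pair giving the lowest test RMSE is kept. The fitting scheme follows the generic least-squares RBF snippet given earlier, and the candidate grids and data are illustrative, not the study's.

```python
# Grid search over RBF width and hidden-layer size (assumes NumPy and scikit-learn).
import numpy as np
from sklearn.cluster import KMeans

def rbf_design(X, centres, sigma):
    d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
    return np.hstack([np.ones((len(X), 1)), np.exp(-d**2 / (2 * sigma**2))])

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, (416, 3)); y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2]
X_tr, X_te, y_tr, y_te = X[:332], X[332:], y[:332], y[332:]

best = (np.inf, None)
for n_centres in (5, 10, 20, 40):                 # candidate hidden-layer sizes
    centres = KMeans(n_clusters=n_centres, n_init=10, random_state=0).fit(X_tr).cluster_centers_
    for sigma in (0.3, 0.5, 1.0, 2.0):            # candidate width parameters
        w, *_ = np.linalg.lstsq(rbf_design(X_tr, centres, sigma), y_tr, rcond=None)
        rmse = np.sqrt(np.mean((rbf_design(X_te, centres, sigma) @ w - y_te) ** 2))
        best = min(best, (rmse, (n_centres, sigma)))
print("best (RMSE, (neurons, width)):", best)
```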
3.5.6. DWT-GRNN Hybrid Model for LIBOR Prediction
In this study, spread values from 0.1 to 1 with a step size of 0.01 were investigated, and the value that gave the best R and lowest MSE for both the training and testing LIBOR data was chosen for the optimal model. The Gaussian kernel was used in the GRNN algorithm. Table 6 presents the DWT-GRNN optimal training and testing results for the a4, d1, d2, d3, and d4 components.
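The spread search can be sketched as below: GRNN predictions are evaluated for spread values from 0.1 to 1.0 in steps of 0.01, and the spread with the lowest test MSE is retained. The GRNN estimator is the kernel-average form from the earlier snippet, and the data are placeholders.

```python
# Spread search for a GRNN-style estimator (assumes NumPy).
import numpy as np

def grnn_predict(X_train, y_train, X_query, spread):
    d2 = np.sum((X_query[:, None, :] - X_train[None, :, :]) ** 2, axis=2)
    K = np.exp(-d2 / (2 * spread ** 2))
    return (K @ y_train) / np.sum(K, axis=1)

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, (416, 3)); y = np.sin(X[:, 0]) + X[:, 1]
X_tr, X_te, y_tr, y_te = X[:332], X[332:], y[:332], y[332:]

spreads = np.arange(0.10, 1.01, 0.01)
mses = [np.mean((grnn_predict(X_tr, y_tr, X_te, s) - y_te) ** 2) for s in spreads]
print("optimal spread:", spreads[int(np.argmin(mses))], "test MSE:", min(mses))
```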
3.5.7. Model Evaluation and Selection Criteria
The performance indicators employed for analysing the accuracy of the hybrid DWT-BPNN, DWT-GRNN, and DWT-RBFNN models are the correlation coefficient (R), root mean square error (RMSE), coefficient of determination (R2), mean absolute deviation (MAD), mean absolute percentage error (MAPE), and Variance Accounted For (VAF) [43, 47]. The scatter index (SI) and BIAS were also used as additional statistical tools to appraise the performance of each developed hybrid model [48]. Their respective mathematical expressions are given in equations (12)–(19). The Akaike information criterion (AIC) was used as the model selection criterion, as shown in equation (20):

AIC = n ln(RSS/n) + 2k (20)

where n is the number of data points, Xi is the actual observation of the LIBOR time series, Yi is the estimated LIBOR value, X̄ is the mean of Xi, Ȳ is the mean of Yi, k is the number of estimated parameters in the model, and RSS is the residual sum of squares.
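For reference, the sketch below computes these statistics using common textbook definitions together with the RSS-based AIC stated above. The paper's own expressions in equations (12)–(19) may differ in detail (for example, in how MAD and SI are normalised), so the snippet is illustrative rather than a reproduction of those equations.

```python
# Common evaluation statistics for predicted vs. actual series (assumes NumPy).
import numpy as np

def evaluate(actual, pred, k_params=1):
    e = actual - pred
    n = len(actual)
    rss = np.sum(e ** 2)
    return {
        "R": np.corrcoef(actual, pred)[0, 1],
        "RMSE": np.sqrt(np.mean(e ** 2)),
        "R2": 1 - rss / np.sum((actual - actual.mean()) ** 2),
        "MAD": np.mean(np.abs(e)),                      # mean absolute deviation of errors
        "MAPE_%": 100 * np.mean(np.abs(e / actual)),
        "VAF_%": 100 * (1 - np.var(e) / np.var(actual)),
        "SI": np.sqrt(np.mean(e ** 2)) / actual.mean(), # RMSE scaled by the mean observation
        "BIAS": np.mean(pred - actual),
        "AIC": n * np.log(rss / n) + 2 * k_params,
    }

actual = np.array([1.8, 2.1, 2.4, 2.2, 1.9])
pred = np.array([1.7, 2.0, 2.5, 2.2, 2.0])
print(evaluate(actual, pred))
```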
4. Results and Discussion
4.1. Discrete Wavelet Transform Results
The Daubechies wavelet of level four (db4) was successfully applied to obtain the wavelet coefficients through the decomposition process. This was done in MATLAB. Figure 7 illustrates the results of the decomposition after applying the DWT to the original LIBOR time series data.

In Figure 7, a4 represents the low frequency component, d1, d2, d3, d4 are the high frequency components, and s is the original LIBOR data. Subsequently, the ANN (BPNN, RBFNN, and GRNN) models were used to predict each of the low and high frequency components. The data was reconstructed by summing the individual components predicted and comparing with the original LIBOR time series data for further statistical analysis and interpretation.
4.2. Interpretation of the Hybrid Model Results
Generally, it can be observed from the statistical results in Table 7 that the various DWT-ANN models (DWT-BPNN, DWT-RBFNN, and DWT-GRNN) produced predictions close to the actual LIBOR data points, with all models achieving R values greater than 96%. However, the LIBOR values predicted by the DWT-RBFNN model correlated better with the actual LIBOR values than those of the competing models, achieving the highest R value of 99.95%. Further comparison indicates that the DWT-GRNN performance deviated from that of the DWT-RBFNN model by approximately 0.011, 0.167, and 0.1523 with respect to R, RMSE, and MAD, respectively. Compared with the DWT-BPNN, the deviations from the DWT-RBFNN performance were 0.0261, 0.2275, and 0.1652 with respect to R, RMSE, and MAD, respectively. Moreover, the DWT-RBFNN model accounted for 99.90% of the variation in the predicted LIBOR values, which was higher than that accounted for by the DWT-BPNN (94.75%) and the DWT-GRNN (97.71%), as confirmed by the R2 results (Table 7). Again, the DWT-RBFNN gave the lowest SI and BIAS values of 0.0219 and 0.0053, respectively, compared with the other models.
In addition to the statistical evaluators, the forecasting behaviour was examined graphically (Figure 8) to provide further evidence of individual model performance. The interpretation here is that the best model produces predictions that track the actual values most closely. It can be seen that the DWT-RBFNN outperformed the other models (DWT-BPNN and DWT-GRNN). Further observations were made by graphically analysing the errors of the various models (Figure 9). From these analyses, it can be seen that the deviations of the DWT-RBFNN model are smaller and more consistent than those of the other investigated techniques. Although all the hybrid techniques performed well, it is evident that the DWT-RBFNN produced superior results.


4.3. Uncertainty Analysis
With the aim of checking the expected range within which the actual LIBOR values predicted by each hybrid model lie, an uncertainty analysis was conducted. The uncertainty can be estimated from the errors calculated for the measurement process of the tests under consideration. In this study, the U95 analysis was used to calculate the uncertainty interval, as presented in equation (21) [49], where X1 is the observed LIBOR value, X2 is the predicted value, and X̄2 is the mean of the predicted values for the n data points. The smaller the value of U95, the more accurate the model's predicted LIBOR values. The interpretation is that, if the process is repeated several times, the true value of the test output will lie within the presented uncertainty interval in approximately 95 out of every 100 trials. In this study, the U95 values of the DWT-BPNN and DWT-GRNN were 0.1856 and 0.1856, respectively. The DWT-RBFNN presented the lowest uncertainty value of 0.1767 and is hence superior to the DWT-BPNN and DWT-GRNN models.
4.4. Model Selection
The calculated AIC values for the various DWT-ANN hybrid models for predicting LIBOR values are presented in Table 8.
From Table 8, it can be observed that the DWT-RBFNN achieved the lowest AIC value (-314.66), followed by the DWT-GRNN (-17.45) and the DWT-BPNN (25.19). On the basis of the AIC criterion, the model with the lowest AIC value is selected as the best among the competing models. By virtue of that, the DWT-RBFNN is selected as the best performing model, ahead of the DWT-GRNN and DWT-BPNN, for LIBOR prediction. Unlike the BPNN and GRNN, which can be quite sensitive to noisy data, the RBFNN demonstrates strong tolerance to such data [50, 51]. It is demonstrated in this study that the RBFNN has a better ability for nonlinear fitting and shows better generalisation than the other methods investigated.
5. Conclusion
In this study, a hybrid DWT-ANN approach for predicting LIBOR has been successfully developed and applied. The hybrid approach combines the capabilities of wavelet analysis and artificial neural networks to capture the non-stationary and nonlinear characteristics embedded in the LIBOR time series data. To achieve the set objective, three different hybrid models, namely DWT-BPNN, DWT-RBFNN, and DWT-GRNN, were developed and implemented. The results indicate that these hybrid models are capable of handling the nonlinear LIBOR time series data, as the performance indicators (R, RMSE, R2, MAD, MAPE, VAF, SI, and BIAS) of all the models were competitive. However, the DWT-RBFNN model was selected as the best approach because it achieved the lowest RMSE, MAD, and MAPE values of 0.0376, 0.0250, and 2.1439%, respectively, and the highest R2 and VAF values of 0.9990 and 99.84%, respectively. The claim that DWT-RBFNN is the best performing model was confirmed using the AIC model selection criterion, under which it achieved the lowest AIC value of -314.66 compared with the other competing models. In conclusion, the DWT-RBFNN model offers an alternative approach for reliably predicting future LIBOR values that will inform realistic modelling of the borrowing rates for funding mineral projects. By obtaining a realistic borrowing rate, mining firms can reliably establish a minimum rate of return for evaluating the economic viability of mineral projects. This is because the minimum rate of return (discount rate) offers the mineral investor one major input parameter needed for evaluating the NPV of mineral projects to ascertain their profitability or otherwise.
Data Availability
The data used to support the findings of this study are available from Macrotrends LLC at https://www.macrotrends.net/1433/historical-libor-rates-chart.
Conflicts of Interest
The authors declare no conflicts of interest.
Authors’ Contributions
Richard Gyebuni was involved in conceptualization, methodology, data collection and cleaning, model development and implementation, formal analysis, and writing; Yao Y. Ziggah carried out data preparation, methodology, investigation, visualisation, supervision, and writing; Daniel Mireku-Gyimah supervised the study.