Abstract
In recent years, there has been increasing interest in exploring diversified features to measure the credit risk of small and medium-sized enterprises (SMEs). Path-based features, which reveal logical connections between SMEs, are widely adopted as an informative kind of feature for causal inference in credit risk evaluation. Since there may exist thousands of feature paths to a target enterprise, selecting the most informative path-based features for its credit risk evaluation becomes a challenging problem. To solve this problem, we propose a novel feature selection method that considers both the similarity and the importance of features' structured semantics as factors of informativeness. With this, the proposed method can effectively rank conventional and path-based features together. Furthermore, to improve the efficiency of the method, a heuristic algorithm is proposed to quickly search for candidate features. Through extensive experiments, we show that our method performs competitively with other state-of-the-art selection methods.
1. Introduction
Small and medium-sized enterprises (SMEs) are an essential part of the national economy, and their development directly affects national economic growth. In recent years, how to accurately assess the credit risk of SMEs has attracted great attention from academia and industry. The most widely adopted approach is to evaluate the risk by incorporating various financial SME features into statistical models that predict whether potential risks exist. Among the various kinds of features, conventional features and path-based features are the two types commonly used in the evaluation process.
Conventional features refer to unstructured and independent financial features, which reflect the basic information of enterprises. Common conventional features include enterprise solvency, employee size, and business duration. Path-based features are well-structured and interdependent financial features, which describe external influences on enterprises through specified relationships. For example, in Figure 1, Path 1 is a path feature representing the parent-subsidiary relation between Walmart and Sam's Club.

Conventional features mainly focus on describing an enterprise's self-related information, which may be somewhat ineffective for evaluating credit risk in today's financial environment. The reason is that, with the expansion of the global market, SMEs usually have a large number of complicated relations with other SMEs, and their financial status can be easily affected by these related SMEs, which makes simple self-related features lose their effectiveness. For example, an SME may still carry potentially high risk even if it is in good financial condition, since contagion risk may come from its associated enterprises, such as its parent enterprises. Therefore, compared to self-related information, the interaction information between SMEs deserves more attention in studying SME credit risk. Path-based features were proposed to model such interactions in information networks [1]. To avoid losing important information, heterogeneous information networks [2] are often used to model the complicated relations between SMEs with a graph data structure. In such a network, every specified relation between two enterprises can be represented as a graph path, whose semantic information can be explicitly captured from the data structure. For example, in Figure 1, Path 2 represents the information that Truenorth's founder is also a board member of Walmart. If Truenorth falls into financial crisis, it may affect the financial status of Walmart. In this way, complicated relations between SMEs can be systematically and concisely defined as graph paths.
Even though path-based features demonstrate an advantage in evaluating credit risk, in SME information networks there may exist numerous paths to an enterprise, some of which carry useless information for evaluation. Thus, how to select the most informative features becomes a challenging problem. Unfortunately, most existing feature selection methods may not apply well to path-based features, since they were originally designed for conventional features and never consider the structured semantics of features. If these methods are used for path-based features, many features with similar structured semantics will be retained, which makes the candidate feature set focus too much on limited information. Therefore, in this paper, we propose a novel feature selection method that considers both the importance and the similarity of features' structured semantics as factors of informativeness. First, we measure a feature's importance based on its classification performance with a supervised classifier; features that contribute greatly to classifying default SMEs are regarded as important. Next, besides importance, the similarity between candidate features is taken as another essential factor in our selection method. To keep the selected features unique and diversified, we introduce two kinds of measures to evaluate the similarity between features, for the purpose of reducing feature redundancy: one focuses on the similarity of classification results, and the other focuses on the similarity of path structure. Finally, to improve the efficiency of the proposed method, a heuristic selection algorithm is used to accelerate the selection process. Both theory and practice show that the algorithm can greatly speed up the selection process while achieving satisfactory selection results.
The rest of this paper is organized as follows. Section 2 introduces SME credit risk evaluation methods and state-of-the-art feature selection methods; Section 3 gives the basic definitions of information networks and the commonly used path-based features; Section 4 proposes the novel feature selection method and introduces a heuristic algorithm to accelerate the selection process; Section 5 presents the experiments and the analysis of the experimental results; and Section 6 concludes the paper.
2. Related Work
In the 1960s, Altman [3] used a set of financial features to evaluate enterprise credit risk. Since then, many researchers have focused on using financial features to evaluate SME credit risk. For example, Cultrera [4] used the current ratio, total asset turnover rate, and ten more financial ratios to evaluate SME credit risk. Gupta [5] investigated the effectiveness of operating cash flow for UK SMEs. Financial features can describe SME situations meaningfully. However, due to the imperfect internal systems of enterprises, the financial statements of many SMEs may be unaudited and unreliable. Thus, many researchers started to add nonfinancial features to the evaluation system, such as enterprise age [6], industrial sector [7], the ability of enterprise managers [8], and enterprise management structure. Tsai [9] used enterprise news information to evaluate the credit risk of SMEs. Yin [10] used SME legal judgment information together with financial and nonfinancial firm features to evaluate credit risk. With the development of data mining, data related to enterprises have accumulated, such as upstream and downstream enterprise information and parent or subsidiary enterprise information. The numerous relationships between different entities have also provided researchers with new ideas for finding SME credit risk factors. Several researchers use information networks to extract SME-related features. For example, Moro [11] considers the impact of the trust relationship between SMEs and bank managers on enterprise credit risk. Tobback [12] collects interenterprise relationship data to measure SME credit risk. Kou [13] collects enterprise manager, shareholder, and payment information and builds three information networks to extract evaluation features. However, due to the complicated relationships between SMEs and their associated entities, some essential information may be lost by only considering homogeneous relations. Therefore, many researchers extend the object and relation types between SMEs and their associated entities. Du [14] collects enterprise, person, commodity, and news information of SMEs and builds an information network of SMEs to measure credit risk. Zhong [15] collects enterprise, investor, enterprise category, and enterprise location information and builds an information network to predict investment behavior. Extracting enterprise-related information through information networks dramatically increases the number of features available to measure enterprise credit risk.
Feature subset generation methods can be divided into three categories. The first is the complete search strategy [16], which determines the feature subset by enumerating all possible combinations. The second is the heuristic search strategy [17], which evaluates each search location to find the best one and then searches from that location until reaching the goal; this strategy avoids a large number of unnecessary search paths, reduces the amount of computation, and improves efficiency. The third is the random search strategy [18], which randomly generates a number of feature subsets and then evaluates them. Feature subset evaluation methods mainly consider two aspects: class relevance and redundancy removal. Most evaluation methods, such as the Relief [19] and ReliefF [20] algorithms, can find the most relevant features effectively but are unable to remove redundant features. Therefore, many feature selection algorithms have been proposed, such as the mRMR algorithm [21], which applies information theory to measure both class relevance and the pairwise correlation between features. The FCBF [22] applies symmetrical uncertainty to measure both class relevance and the pairwise correlation between features. Furthermore, as the relationships between features are complex, some feature subset evaluations consider class relevance, feature redundancy, and complementarity. The RCDFS [23] extends traditional redundancy analysis to redundancy-complementariness analysis beyond the class relevance and redundancy measures. The self-adaptive feature evaluation (SAFE) [24] algorithm applies a complement strategy in the search process and proposes an adaptive cost function to penalize redundancy and reward complementarity. This paper proposes a feature selection algorithm that considers class relevance, feature redundancy, and feature structures and semantics.
3. Preliminary
Information network is a classical data structure used to model objects and relations in a directed graph. Given different objects in information networks, logical connections can be effectively constructed, and semantic relationships can be easily captured.
Definition 1. An information network is a directed graph $G = (V, E)$ with an object type mapping function $\phi: V \rightarrow \mathcal{A}$ and a relation type mapping function $\psi: E \rightarrow \mathcal{R}$, where each object $v \in V$ belongs to one object type $\phi(v) \in \mathcal{A}$ and each link $e \in E$ belongs to one relation type $\psi(e) \in \mathcal{R}$.
Figure 2 is an example of an information network centered on one enterprise.
This network contains four object types: enterprise, commodity, person, and news, and eight relation types: subsidiary, supplier, founder, board member, son, reports, produce, and sale. Most of the objects in the graph are enterprises; the remaining objects are commodities, persons, and news items, and each link carries one of the eight relation types. For example, a subsidiary link connects a parent enterprise to one of its subsidiaries, founder and board member links connect persons to enterprises, and produce and sale links connect enterprises to commodities.
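To make Definition 1 concrete, a minimal Python sketch of such a typed network is given below, using networkx. It is an illustration only: the nodes founder_a and commodity_x and the exact set of edges are our own assumptions, not a reproduction of the objects in Figure 2.

```python
# Illustrative sketch only: a tiny typed network in the spirit of Figure 2.
# Node names such as founder_a and commodity_x are hypothetical.
import networkx as nx

G = nx.MultiDiGraph()

# Objects with their object types (enterprise, commodity, person, news).
G.add_node("Walmart", obj_type="enterprise")
G.add_node("Sams Club", obj_type="enterprise")
G.add_node("Truenorth", obj_type="enterprise")
G.add_node("founder_a", obj_type="person")
G.add_node("commodity_x", obj_type="commodity")

# Links with their relation types (a subset of the eight types listed above).
G.add_edge("Walmart", "Sams Club", rel_type="subsidiary")
G.add_edge("founder_a", "Truenorth", rel_type="founder")
G.add_edge("founder_a", "Walmart", rel_type="board member")
G.add_edge("Sams Club", "commodity_x", rel_type="sale")

# The object type function and relation type function of Definition 1
# are simply attribute lookups in this representation.
phi = nx.get_node_attributes(G, "obj_type")
print(phi["Walmart"])  # -> enterprise
```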

Definition 2. The network schema, denoted $T_G = (\mathcal{A}, \mathcal{R})$, is a meta-level representation of an information network $G = (V, E)$ with object type mapping $\phi: V \rightarrow \mathcal{A}$ and relation type mapping $\psi: E \rightarrow \mathcal{R}$; it is a directed graph defined over the object types in $\mathcal{A}$, with edges as relations from $\mathcal{R}$.
Figure 3 shows the corresponding network schema of Figure 2.

Definition 3. Given a schema $T_G = (\mathcal{A}, \mathcal{R})$, a path $P$ of the form $A_1 \xrightarrow{R_1} A_2 \xrightarrow{R_2} \cdots \xrightarrow{R_l} A_{l+1}$ defines a composite relation $R = R_1 \circ R_2 \circ \cdots \circ R_l$ between $A_1$ and $A_{l+1}$, where $\circ$ denotes the composition operator on relations. For simplicity, we use the sequence of object type and relation type names to denote the path: $P = (A_1 R_1 A_2 R_2 \cdots R_l A_{l+1})$.
From the above definitions, some commonly used path-based features are given:
(1) Common-neighbors feature [25]: the common-neighbors feature is defined as the number of common neighbors shared by two objects $a$ and $b$, namely, $|N(a) \cap N(b)|$, where $N(\cdot)$ denotes the neighbor set of an object and $|\cdot|$ denotes the size of a set.
(2) Path-count feature [26]: the path-count feature is defined as the number of path instances between two objects $a$ and $b$ following a given meta-path $P$.
(3) Naive-MP feature [14]: the Naive-MP feature is defined as the impact of a meta-path on the target object, computed by aggregating a risk inference function over the path instances from the objects of an SME collection to the target object, as defined in [14].
In Figure 2, one enterprise has 2 path instances following the meta-path Enterprise $\xrightarrow{\text{subsidiary}}$ Enterprise. To illustrate path-based features, take the path-count feature as an example: when evaluating the credit risk of this enterprise, its path-count feature on the above meta-path equals 2, which means the enterprise has 2 subsidiaries in total.
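As an illustration of how such features can be computed, the sketch below implements the common-neighbors and path-count features on the toy network G built in the previous sketch; representing a meta-path as a tuple of relation types is our own simplification, not the paper's formulation.

```python
# Illustrative sketch: common-neighbors and path-count features on the toy
# network G built in the previous sketch.
def common_neighbors(G, a, b):
    # Number of common neighbors shared by objects a and b.
    n_a = set(G.successors(a)) | set(G.predecessors(a))
    n_b = set(G.successors(b)) | set(G.predecessors(b))
    return len(n_a & n_b)

def path_count(G, source, meta_path):
    # Number of path instances starting at `source` whose edge types follow
    # the given meta-path, e.g. ("subsidiary",) for Enterprise -> Enterprise.
    frontier = [source]
    for rel in meta_path:
        nxt = []
        for node in frontier:
            for _, target, data in G.out_edges(node, data=True):
                if data.get("rel_type") == rel:
                    nxt.append(target)
        frontier = nxt
    return len(frontier)

# Count how many subsidiaries an enterprise has (cf. the example above).
print(path_count(G, "Walmart", ("subsidiary",)))
```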
4. Methods
In this section, a method is proposed to find the top-k informative features from the pool of candidate features. We regard candidate features with high importance in predicting default SMEs and low similarity in classification results and path structure as the informative ones. The measurements of importance and similarity are detailed in Section 4.1 and Section 4.2, respectively. The final set of top-k features is selected in Section 4.3.
4.1. The Importance of Features
An important feature is a feature that has a significant impact on determining whether an enterprise will default; it helps direct our model to learn and predict correctly. In this paper, we measure a feature's importance based on its classification performance under a supervised model. Based on the classification result of the supervised model, a given feature can be evaluated with different measures such as accuracy, precision, recall, and $F_1$. Specifically for the SME default problem, the datasets are usually highly imbalanced, where the number of default enterprises is much smaller than the number of nondefault enterprises. In order to correctly find as many default enterprises as possible, we select $F_1$ as the importance measure, which balances the effect of both precision and recall. For simplicity, the logistic regression model [27] is used as the supervised model in this paper. The definition of the $F_1$ measure is given as follows.
Definition 4. $F_1 = \dfrac{2 \cdot Precision \cdot Recall}{Precision + Recall}$, with $Precision = \dfrac{\sum_{e \in D} \mathbb{1}[y_e = 1 \wedge \hat{y}_e = 1]}{\sum_{e \in D} \mathbb{1}[\hat{y}_e = 1]}$ and $Recall = \dfrac{\sum_{e \in D} \mathbb{1}[y_e = 1 \wedge \hat{y}_e = 1]}{\sum_{e \in D} \mathbb{1}[y_e = 1]}$, where $e$ is an enterprise in the dataset $D$, $y_e$ is the actual status of $e$, $\hat{y}_e$ is the predicted status of $e$, $y_e = 1$ means $e$ is default, and $y_e = 0$ means $e$ is nondefault.
The value of the $F_1$ measure is used as the score of feature importance. In the rest of this paper, we denote the importance score of feature $f_i$ as $Imp(f_i)$.
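A minimal sketch of this importance computation is given below, assuming scikit-learn's logistic regression; the helper name importance and the use of out-of-fold predictions are our own choices rather than details specified by the paper.

```python
# Sketch: train a logistic regression on a single feature and take the F1
# score on the default class as the feature's importance.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import cross_val_predict

def importance(feature_values, labels):
    # feature_values: one value of the candidate feature per enterprise.
    # labels: 1 for default enterprises, 0 for nondefault enterprises.
    X = np.asarray(feature_values, dtype=float).reshape(-1, 1)
    y = np.asarray(labels)
    # Out-of-fold predictions avoid scoring the model on its training data.
    y_pred = cross_val_predict(LogisticRegression(max_iter=1000), X, y, cv=5)
    return f1_score(y, y_pred)
```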
4.2. The Similarity between Features
Besides the importance of features, the similarity between features is another essential factor to consider in the feature selection process. Similar features bring redundancy into the selection result, making the selected features focus too much on limited information; with redundant features, the learned model may lose its generalization ability in classification. In order to keep the model effective, we expect the selected features to be as mutually different as possible. In the following, we introduce two measures to evaluate the similarity between features: the first is based on the consistency of classification results, and the second is based on the matching of path structures.
4.2.1. Similarity on Classification Result
The importance measure evaluates each feature based on its individual classification performance. However, it is possible that two features have the same importance score but different predictions on some data examples. This difference measures how far two features agree on the status of an enterprise: the smaller the difference, the more similar the views shared by those features. Thus, the consistency of features' classification results can be treated as a similarity measure. In this paper, the consistency between features is computed from the classification results learned by the supervised model, similar to the process of computing feature importance. That is, we use each feature to train a logistic regression model to classify default SMEs, and the consistency of their results is taken as the similarity between the features. We formally define this consistency similarity as follows.
Definition 5. $Sim_{cls}(f_i, f_j) = \dfrac{1}{|D|} \sum_{e \in D} \mathbb{1}\big[\hat{y}_e^{(i)} = \hat{y}_e^{(j)}\big]$, where $e$ is an enterprise in the dataset $D$ and $\hat{y}_e^{(i)}$ and $\hat{y}_e^{(j)}$ are the predicted status of $e$ by the supervised models learned respectively from feature $f_i$ and feature $f_j$.
According to the definition, $Sim_{cls}(f_i, f_j)$ is exactly the similarity between features $f_i$ and $f_j$ in terms of their classification results.
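Under the reading of Definition 5 above, the measure reduces to the fraction of enterprises on which the two single-feature models agree; the short sketch below is an illustration of that reading rather than the paper's exact formula.

```python
# Sketch of the classification-result similarity, read as the agreement rate
# between the predictions of the two single-feature models.
import numpy as np

def sim_cls(pred_i, pred_j):
    # pred_i, pred_j: predicted default status produced by the models learned
    # from feature f_i and feature f_j, one entry per enterprise.
    pred_i, pred_j = np.asarray(pred_i), np.asarray(pred_j)
    return float(np.mean(pred_i == pred_j))
```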
4.2.2. Similarity on Path Structure
Above, the consistency of classification results is used to measure the similarity between features. However, this measure is somewhat biased, as its result may vary with different business backgrounds. For instance, when studying SMEs in conventional retail, the similarity between the feature of product quality and the feature of marketing director capability may be relatively high, and both are essential factors in default prediction; conversely, when studying SMEs in online retail, the similarity between those two features may decrease, since e-commerce enterprises are usually significantly product-driven rather than marketing-driven. In order to alleviate such bias, we introduce another measure that evaluates feature similarity from the perspective of semantics, which is naturally independent of business backgrounds. We regard the similarity of path structures as the similarity of feature semantics; high diversity of paths improves the compatibility and robustness of the learned model. Mathematically, we use the Levenshtein distance [28] to measure the similarity between paths, where the distance is the minimum number of edit steps required to change one path into another. We denote this similarity as $Sim_{path}$, and its definition is given as follows:
Definition 6. $Sim_{path}(f_i, f_j) = 1 - \dfrac{lev(p_i, p_j)}{\max(|p_i|, |p_j|)}$, where $p_i$ and $p_j$ are the path structures of features $f_i$ and $f_j$, $|p_i|$ and $|p_j|$ are the path lengths of $p_i$ and $p_j$, and $lev(p_i, p_j)$ is the Levenshtein distance between the two path structures.
For example, consider the path structure underlying the feature of an enterprise's marketing director capability and the path structure underlying the feature of an enterprise's product quality. Computing the distance between the two path structures amounts to computing the Levenshtein distance between them. With a resulting distance of 2 between two paths of length 3, the similarity on path structure between the two features is 0.33.
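The sketch below implements the Levenshtein distance over type sequences and the normalized similarity of Definition 6 as reconstructed above; the normalization by the longer path length is our assumption, and the type sequences in the example are hypothetical stand-ins for the two paths discussed in the text.

```python
# Sketch of the path-structure similarity: 1 - lev / max(path length).
def levenshtein(p, q):
    # Classic dynamic-programming edit distance, applied to sequences of
    # object/relation type names instead of characters.
    m, n = len(p), len(q)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if p[i - 1] == q[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def sim_path(p, q):
    return 1.0 - levenshtein(p, q) / max(len(p), len(q))

# Hypothetical type sequences: two length-3 paths at distance 2 give 0.33.
print(round(sim_path(("E", "P", "capability"), ("E", "C", "quality")), 2))
```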
4.3. The Proposed Feature Selection Algorithm
With the measures of importance and similarity, in this section, we give an algorithm to find the top-k informative features. Each selected feature should have a high importance score and low similarity scores with the other selected features. That is to say, among all possible feature combinations from the candidate feature pool, the final feature set should have the maximum total importance score and the minimum total similarity score. The mathematical goal can be presented as follows:

$$\max_{S \subseteq F,\ |S| = k} \ \sum_{f_i \in S} Imp(f_i) \;-\; \sum_{f_i, f_j \in S,\ i < j} \big(\alpha \cdot Sim_{cls}(f_i, f_j) + \beta \cdot Sim_{path}(f_i, f_j)\big), \tag{4}$$

where $F$ is the pool of all candidate features with size $n$, $S$ is the result set of selected features with size $k$, and $\alpha$ and $\beta$ are two weight parameters of $Sim_{cls}(f_i, f_j)$ and $Sim_{path}(f_i, f_j)$ with features $f_i$ and $f_j$.
It is obvious that exhaustive search is inappropriate for the above problem, as its time complexity is on the order of $\binom{n}{k}$: when the number of features is large, the search process is significantly time-consuming. Greedy search algorithms are usually applied to this kind of problem. However, in a naive greedy algorithm, as long as a feature has not yet been selected into the result set, its similarity with the features already selected is recalculated at every iteration; such repeated similarity computation is wasteful. Therefore, we propose an upgraded version, the greedy-search feature selection (GSFS) algorithm (Algorithm 1), to find the result set. The proposed algorithm is a practical greedy algorithm with a time complexity of $O(nk)$.
Algorithm 1: The greedy-search feature selection (GSFS) algorithm.
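Since the pseudocode of Algorithm 1 is not reproduced here, the sketch below illustrates the GSFS idea under our own notation: keep, for every remaining candidate, its accumulated similarity to the already selected features, and pick the candidate with the largest marginal gain at each step. The function and variable names (gsfs, imp, acc) are our own assumptions.

```python
# Sketch of the GSFS idea: greedy selection with incremental similarity updates.
def gsfs(features, imp, sim_cls, sim_path, k, alpha, beta):
    # features: candidate feature identifiers; imp[f]: importance score of f;
    # sim_cls(f, g) and sim_path(f, g): the two similarity measures.
    selected = []
    acc = {f: 0.0 for f in features}   # accumulated similarity with `selected`
    candidates = set(features)
    while len(selected) < k and candidates:
        best = max(candidates, key=lambda f: imp[f] - acc[f])
        selected.append(best)
        candidates.remove(best)
        # Each remaining candidate pays its similarity to `best` exactly once,
        # so similarities are never recomputed in later iterations: O(nk) total.
        for f in candidates:
            acc[f] += alpha * sim_cls(f, best) + beta * sim_path(f, best)
    return selected
```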
The proposed algorithm can always find a local optimal solution in the feature selection process. The proof and analysis are given in the rest of this section.
Theorem 1. Through the search in Algorithm 1, the local optimal solution to (4) can always be found.
Proof. A greedy search algorithm always looks for the local optimal solution based on its previous result; that is, when a new feature is selected, the previously selected features are kept. Then, at the $t$-th iteration, there must exist $S_t = S_{t-1} \cup \{f_t\}$, and the greedy objective can be rewritten as

$$\Big[\sum_{f_i \in S_{t-1}} Imp(f_i) - \sum_{f_i, f_j \in S_{t-1},\ i < j} \big(\alpha Sim_{cls}(f_i, f_j) + \beta Sim_{path}(f_i, f_j)\big)\Big] + \Big[Imp(f_t) - \sum_{f_i \in S_{t-1}} \big(\alpha Sim_{cls}(f_t, f_i) + \beta Sim_{path}(f_t, f_i)\big)\Big]. \tag{5}$$

As the first part of the objective is the result achieved at the $(t-1)$-th iteration, it is constant at the $t$-th iteration. Therefore, maximizing the objective in (5) is equivalent to maximizing its second part:

$$\max_{f_t \in F \setminus S_{t-1}} \ Imp(f_t) - \sum_{f_i \in S_{t-1}} \big(\alpha Sim_{cls}(f_t, f_i) + \beta Sim_{path}(f_t, f_i)\big). \tag{6}$$

With the notations in Algorithm 1, let $acc(f)$ denote the accumulated similarity of a candidate feature $f$ with the features already selected; maximizing the second part is then equal to maximizing $Imp(f) - acc(f)$. In Algorithm 1, with the feature $f_t$ selected at each iteration, the algorithm iteratively updates $acc(f)$ for each $f$ in the current candidate set by adding $\alpha Sim_{cls}(f, f_t) + \beta Sim_{path}(f, f_t)$. It can be seen that, for a feature $f$ not yet selected, $acc(f) = 0$ at the 1st iteration, $acc(f) = \alpha Sim_{cls}(f, f_1) + \beta Sim_{path}(f, f_1)$ at the 2nd iteration, and $acc(f) = \sum_{f_i \in S_{t-1}} \big(\alpha Sim_{cls}(f, f_i) + \beta Sim_{path}(f, f_i)\big)$ at the $t$-th iteration. Therefore, selecting $f_t$ as the feature with the maximum $Imp(f) - acc(f)$ at each iteration is equivalent to selecting the feature that satisfies the objective in (6). The theorem is proved.
5. Experiments
In this section, we are going to investigate the effectiveness of our proposed method. We conduct experiments on three real-world datasets. The result and explanation will be detailed in this section.
5.1. Experimental Settings
In our experiments, three datasets are used for comparison. The SMB dataset provides information on traditional small and medium-sized enterprises, while the GEM and STAR datasets give statistics about high-technology enterprises. All the datasets can be downloaded from CSMAR. In total, 48 frequently used conventional features and 4548 path-based features are used for feature selection. The statistics of the datasets are shown in Table 1.
All the experiments were implemented in Python 2.7.17 on a Windows machine.
5.2. Performance of Feature Selection
In this section, we compare our proposed method with five state-of-the-art selection methods for ranking the most informative features. For our method, the parameters $\alpha$ and $\beta$ are configured for each dataset according to the settings in Section 5.3. The other five selection methods are as follows:
mRMR [21]: a well-known feature selection algorithm that applies mutual information (MI) metrics to measure feature-class relevance and the pairwise correlation between features.
FCBF [22]: it first applies symmetrical uncertainty (SU) as a metric to measure feature-class relevance and then uses an approximate Markov blanket to check redundant features.
mIMR [29]: it considers feature-class relevance and the net effect of redundancy and complementarity, using joint mutual information.
RCDFS [23]: it not only considers feature-class relevance and the pairwise correlation between features but also takes into account the effect of redundancy-complementariness dispersion.
FS-RRC [30]: it first applies symmetrical uncertainty (SU) as a metric to measure feature-class relevance, then uses an approximate Markov blanket to check redundant features, and finally computes a complementary score between features based on both the SU score and MI.
All comparisons are conducted on the three datasets mentioned above. To compare the methods, 10-fold cross-validation with logistic regression is used to evaluate their performance. Specifically, we divide each dataset into ten folds, using nine folds for training and one for testing, repeat the cross-validation 20 times, and calculate the classification accuracy and AUC of each method. In order to compare the feature selection methods comprehensively, we run experiments with $k = 20$, $k = 40$, and $k = 80$, where $k$ represents the number of features to select. The comparison results are summarized in Figures 4–6 and Tables 2 and 3.
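A hedged sketch of this evaluation protocol is shown below, using scikit-learn's repeated stratified 10-fold cross-validation with 20 repeats; the synthetic data produced by make_classification merely stands in for a dataset restricted to the k selected features.

```python
# Sketch of the evaluation protocol: logistic regression, 10-fold
# cross-validation repeated 20 times, reporting accuracy and AUC.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_validate

# Imbalanced toy data standing in for the selected-feature matrix and labels.
X, y = make_classification(n_samples=500, n_features=20,
                           weights=[0.9, 0.1], random_state=0)

cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=20, random_state=0)
scores = cross_validate(LogisticRegression(max_iter=1000), X, y, cv=cv,
                        scoring=("accuracy", "roc_auc"))
print("accuracy: %.4f" % np.mean(scores["test_accuracy"]))
print("AUC:      %.4f" % np.mean(scores["test_roc_auc"]))
```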



From the above results, we can see that, in most cases, our proposed feature selection method performs better than the other five selection methods. Although the other five methods also remove similar features using different similarity measures, none of them consider the similarity of feature semantics, making their results less concise than ours. For example, in the GEM dataset, two path features with highly similar structures are both selected by all of the other five methods; our method picks only one of them and ignores the other precisely because of their high semantic similarity. By capturing the similarity of feature semantics, the feature redundancy of our result is lower than that of the other results: 30% of the features selected by those methods are highly similar, with path-based similarity scores larger than 0.7, while only 8% of our features have similarity scores that large.
In Table 2, for the SMB dataset, it is interesting to see that most methods have similar AUC scores in the setting $k = 20$, but when $k = 40$ or $k = 80$, our method outperforms the other five methods. The reason is that, for a complex dataset like SMB, when only 20 features can be selected, all methods perform similarly poorly because there are not enough features for classification; but when 40 or 80 features can be selected, the methods have enough quota to demonstrate their different mechanisms for picking features and thus achieve different performance. The main difference between the results of the compared methods comes from the different similarity measures they use to filter redundant features. In the setting $k = 80$, the other five methods share 55 features in common, whereas our method has only 20 features in common with them. As the compared methods were not originally designed for path-based features, it is not surprising that they select many similar path-based features. Our method, by considering the semantic similarity of path-based features, can efficiently eliminate the redundancy of the selected features, which gives it a 2.52% AUC lead over the other methods on the SMB dataset.
5.3. Combination of Parameters
In this section, we run experiments to compare the effects of different parameter combinations for our method. The proposed method has two key parameters, $\alpha$ and $\beta$, which need to be carefully determined: $\alpha$ controls the weight of the classification similarity, and $\beta$ controls the weight of the path-structure similarity. Table 4 shows the classification accuracy of our method with different parameter combinations on the three datasets.
From the table, it can be observed that a different parameter combination performs best on each of the SMB, GEM, and STAR datasets. It is interesting that the optimal parameter combinations differ greatly across datasets. The reason may be that the complexity of SME relations in the three datasets is at different levels. For the STAR dataset, since there exist only 2157 possible path patterns, most of which are simple and short, the path-structure similarity does not play a big role in reducing redundancy. However, for the SMB and GEM datasets, which contain more complicated path patterns, it becomes necessary to exploit the path-structure similarity to filter redundant features. Therefore, in our experiments, different combinations of $\alpha$ and $\beta$ are set for the different datasets.
5.4. Efficiency Analysis
In this section, an efficiency experiment is conducted to show that our method runs rapidly. To compare efficiency, we run all the methods on the three datasets and record the running time of feature selection. From Figures 7–9, it can be clearly seen that our method runs fastest among all the methods on all three datasets. Take the experiments on the GEM dataset as an illustration: when $k = 20$, our method outperforms the other methods by at least 20 ms; when $k = 40$, by at least 417 ms; and when $k = 80$, by at least 4928 ms. It is easy to see that, as $k$ increases, the performance gap between our method and the others grows as well. The reason is that the other five methods select features in an exhaustive way, whose time complexity grows exponentially with the value of $k$, whereas our method, presented in Algorithm 1, selects features in a heuristic way, whose time complexity grows linearly with the value of $k$. Therefore, in practice, the efficiency of our method far exceeds that of the other methods in general. Overall, the results shown in Sections 5.2 and 5.3 demonstrate that, compared with the other methods, our method can find features of higher quality with higher efficiency.



6. Conclusion
In this paper, we propose a novel feature selection method that considers both importance and similarity. We first measure the importance of features based on their performance in identifying default SMEs. Then, the similarity of classification performance and the similarity of structured semantics are both considered to reduce the redundancy of the selected features. To improve the efficiency of our method, we also introduce a heuristic algorithm to accelerate the selection process. Finally, empirical results demonstrate that our proposed method outperforms other state-of-the-art methods in both feature quality and algorithm efficiency.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare no conflicts of interest.
Acknowledgments
This work was supported by the Project of Science and Technology Research and Development of China State Railway Group Co., Ltd. under Grant K2020Z002.