| Input: The original dataset D |
| Input: The subset sample rate r |
| Input: The feature-selection threshold |
| Input: The number of LightGBM models T |
| Input: The number of training epochs E |
| Output: The classification results |
| Step 1: Pre-process the original dataset |
| Perform Min-Max normalization: x' = (x - min(x)) / (max(x) - min(x)); |
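The Min-Max normalization of Step 1 can be sketched in plain Python; the handling of a constant column (mapping it to 0) is an assumption, since the algorithm box does not specify it:

```python
def min_max_normalize(column):
    """Min-Max normalization: scale each value of one column into [0, 1]."""
    lo, hi = min(column), max(column)
    if hi == lo:
        return [0.0 for _ in column]  # assumed: a constant column maps to 0
    return [(x - lo) / (hi - lo) for x in column]
```

Applied column-by-column, this puts every feature on the same [0, 1] scale before feature selection.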
| Step 2: Calculate feature importance |
| Initialize the feature importance: I_j = 0 for each feature j; |
| For t = 1 to T (the number of LightGBMs) do: |
| Construct subset D_t by sampling the dataset at the sample rate; |
| Train LightGBM model M_t on the subset D_t; |
| Accumulate feature importance: I_j = I_j + (importance of feature j in M_t); |
| End |
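The subsample-train-accumulate loop of Step 2 can be sketched as follows. This is a minimal illustration: `train_fn` is a hypothetical stand-in for fitting one LightGBM on the subset and reading back its per-feature importance scores.

```python
import random

def accumulate_importance(X, y, sample_rate, n_models, train_fn, seed=0):
    """Step 2 sketch: train n_models models, each on a random subset drawn
    at the given sample rate, and sum their per-feature importances.
    train_fn(sub_X, sub_y) -> list of importances (hypothetical stand-in
    for one LightGBM training run)."""
    rng = random.Random(seed)
    n = len(X)
    k = max(1, int(n * sample_rate))        # subset size from the sample rate
    total = [0.0] * len(X[0])
    for _ in range(n_models):
        idx = rng.sample(range(n), k)       # construct the subset
        imp = train_fn([X[i] for i in idx], [y[i] for i in idx])
        total = [a + b for a, b in zip(total, imp)]  # accumulate importance
    return total
```

Averaging over subsets in this way makes the importance ranking less sensitive to any single training sample.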
| Step 3: Select features by threshold |
| Sort the feature importance scores in descending order; |
| Select the features whose importance meets the threshold; |
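Step 3 reduces to a sort and a filter. A minimal sketch, assuming the threshold is applied directly to each feature's accumulated score (the box does not state whether the cutoff is on scores or on a feature count):

```python
def select_features(importance, threshold):
    """Step 3 sketch: rank features by importance (descending) and keep
    the indices of those scoring at or above the threshold."""
    ranked = sorted(enumerate(importance), key=lambda p: p[1], reverse=True)
    return [j for j, score in ranked if score >= threshold]
```

The returned indices are then used to project the normalized dataset onto the selected feature columns.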
| Step 4: Construct DNN classifier |
| Separate the selected features into categorical and numerical features; |
| Map each categorical feature to a dense embedding vector; |
| Concatenate the embedding vectors with the numerical features; |
| Define the deep neural network classifier; |
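The embed-and-concatenate input construction of Step 4 can be sketched without a deep-learning framework. The `embeddings` dict, keyed by `(field_index, value)`, is a hypothetical stand-in for a learned embedding table such as a framework's embedding layer:

```python
def build_input_vector(cat_values, num_values, embeddings):
    """Step 4 sketch: look up a dense vector for each categorical value
    and concatenate it with the numerical features to form the network
    input. embeddings: (field_index, value) -> dense vector (assumed
    learned elsewhere)."""
    vec = []
    for field, value in enumerate(cat_values):
        vec.extend(embeddings[(field, value)])  # categorical -> dense vector
    vec.extend(num_values)                      # append numerical features
    return vec
```

In a real implementation the embedding vectors would be trainable parameters updated in Step 5 along with the network weights.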
| Step 5: Train deep neural network |
| For epoch = 1 to E (the number of epochs) do: |
| Predict the label y-hat with the network; |
| Calculate the cross-entropy loss between y-hat and the true label y; |
| Update the model parameters with the Adam optimizer; |
| End |
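The loss computed in each epoch of Step 5 is standard cross-entropy. A minimal binary-case sketch (the epsilon guard against log(0) is an implementation assumption):

```python
import math

def cross_entropy(y_true, y_prob, eps=1e-12):
    """Mean binary cross-entropy between true labels (0/1) and predicted
    probabilities -- the loss minimized in Step 5."""
    total = 0.0
    for t, p in zip(y_true, y_prob):
        # eps keeps log() finite when p is exactly 0 or 1 (assumed guard)
        total -= t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
    return total / len(y_true)
```

Adam then updates the network (and embedding) parameters from the gradient of this loss each epoch.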
| Return the classification results |