(1) Prepare: preprocess the CHD dataset. Construct the population graph and the K-nearest-neighbor feature graph by (3) and (4), respectively, and normalize the graph matrices. One-hot encode the labels. Then randomly shuffle the samples and split the dataset into a training set and a test set at a ratio of 6 : 4.
(2) Input: training set (feature matrix X and topology graph A), where N is the number of nodes and D is the feature dimension.
(3) Initialization: initialize the network parameters W and b; set dropout = 0.5 and early-stopping patience = 40.
(4) for epoch = 1 : epochs do
(5)   while (the model has not converged) and (rounds without improvement < early-stopping) do
(6)     Feed the feature matrix X and the topology graph A into the three GCN channels of the model; each channel outputs its own graph embedding.
(7)     Compute an attention weight for each channel embedding and multiply each embedding by its weight to obtain the fused input of the attention layer.
(8)     Pass the fused representation through the linear classification (MLP) layer, whose softmax activation converts the feature vector into a probability output.
(9)     Compute the cross-entropy loss and update the network parameters W and b by gradient descent.
(10)  end while
(11) end for
(12) Output: the trained adaptive multi-channel graph convolutional network model.
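The graph-construction part of step (1) can be sketched as follows. This is a minimal NumPy illustration, not the paper's exact procedure: the cosine-similarity metric, the value of K, and the function names are assumptions, while Eqs. (3) and (4) in the paper define the actual population and K-nearest-neighbor graphs.

```python
# Hypothetical sketch of step (1): build a kNN graph from the feature
# matrix and symmetrically normalize the adjacency matrix for a GCN.
import numpy as np

def knn_graph(X: np.ndarray, k: int = 5) -> np.ndarray:
    """Adjacency matrix connecting each node to its k most similar nodes
    (cosine similarity is an assumed choice of metric)."""
    unit = X / np.linalg.norm(X, axis=1, keepdims=True)
    sim = unit @ unit.T
    np.fill_diagonal(sim, -np.inf)          # exclude self from the top-k
    A = np.zeros_like(sim)
    idx = np.argsort(-sim, axis=1)[:, :k]   # indices of the k nearest neighbors
    A[np.arange(X.shape[0])[:, None], idx] = 1.0
    return np.maximum(A, A.T)               # symmetrize the graph

def normalize_adj(A: np.ndarray) -> np.ndarray:
    """Symmetric normalization D^{-1/2} (A + I) D^{-1/2}, the standard
    GCN normalization of the graph matrix."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return (A_hat * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]
```

The symmetrization step (`np.maximum(A, A.T)`) keeps an edge whenever either node selects the other as a neighbor, which is a common convention for turning a directed kNN selection into an undirected graph.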
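The forward pass of steps (6)-(9) can be sketched as below. This is an illustrative NumPy version under assumptions: the per-node, per-channel attention scoring with a single vector `w_att`, the shapes, and all function names are hypothetical, and the gradient-descent update of step (9) is omitted.

```python
# Hypothetical sketch of steps (6)-(9): fuse three channel embeddings with
# attention weights, classify with softmax, and compute the cross-entropy loss.
import numpy as np

def softmax(z: np.ndarray, axis: int = -1) -> np.ndarray:
    z = z - z.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(embeddings, w_att: np.ndarray) -> np.ndarray:
    """Steps (6)-(7): weight each channel embedding per node and sum.

    embeddings: list of C arrays of shape (N, H), one per GCN channel.
    w_att: (H, 1) scoring vector (an assumed form of the attention layer).
    """
    scores = np.stack([(Z @ w_att).squeeze(-1) for Z in embeddings], axis=1)  # (N, C)
    alpha = softmax(scores, axis=1)                # attention weights per node
    Z = np.stack(embeddings, axis=1)               # (N, C, H)
    return (alpha[..., None] * Z).sum(axis=1)      # weighted sum -> (N, H)

def classify(Z: np.ndarray, W: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Step (8): linear classification layer with softmax activation."""
    return softmax(Z @ W + b, axis=1)

def cross_entropy(P: np.ndarray, Y_onehot: np.ndarray) -> float:
    """Step (9): cross-entropy loss; the W, b update by gradient descent
    is left to an autograd framework and omitted here."""
    return float(-np.mean(np.sum(Y_onehot * np.log(P + 1e-12), axis=1)))
```

In a full implementation these operations would live inside an autograd framework so that the cross-entropy loss can drive the gradient-descent update of W, b, and the attention parameters.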