| Symbol | Description |
| --- | --- |
| | A tokenized word, symbol, or number |
| | A log event consisting of a sequence of tokens |
| | A log event with the added special tokens [CLS] and [SEP] |
| | A chronological sequence of log events |
| | A training set consisting of sequences of log events |
| | A directed graph representing the system functional path |
| | The set of all nodes in the system functional path map |
| | The set of all edges in the system functional path map |
| | The collection of node semantic features |
| | The node degree matrix |
| | The i-th node in the node set |
| | The edge between the i-th and j-th nodes |
| | The time difference between adjacent events in the event sequence |
| | The time-difference series representation |
| | The semantic representation of the i-th token in the sequence |
| | The semantic representation of the special token [CLS] |
| | The semantic representation of the special token [SIGN] |
| | The query, key, and value matrices in the attention mechanism |
| | The spatial feature representation of the system functional path map |
| | The temporal feature representation of the logs |
| | The semantic representation of the event sequence |
| | The complete decoder input |
| | The semantic feature mapping of the event sequence |
| | The decoder output representation of the event sequence |
| | The decoder output representation of the special token [SIGN] |
| | The full encoder output, including the special token [SIGN] |
| | The fused feature representation combining the encoder output, spatial features, and temporal features |
| | The hidden representation of the l-th graph convolution layer |
| | The hidden representation of the l-th encoder layer |
| | The hidden representation of the l-th decoder layer |
| | The loss functions used for training |
| | The center of all decoder outputs of the special token [SIGN] over the training set |
| | The label of a log event sequence |
| | The anomaly decision boundary |
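To make the last three symbols concrete, the following is a minimal sketch, assuming that anomaly detection compares the distance between a sequence's [SIGN] decoder output and the center computed over the training set against the decision boundary; the function and variable names (`compute_center`, `is_abnormal`, `boundary`, etc.) are illustrative stand-ins, not the original implementation.

```python
import numpy as np

# Assumed sketch: "sign_output", "center", and "boundary" stand in for the
# decoder output of the special token [SIGN], the center computed over the
# training set, and the abnormal decision boundary listed in the table above.

def compute_center(train_sign_outputs: np.ndarray) -> np.ndarray:
    """Center of all [SIGN] decoder outputs over the training set (row-wise mean)."""
    return train_sign_outputs.mean(axis=0)

def is_abnormal(sign_output: np.ndarray, center: np.ndarray, boundary: float) -> bool:
    """Label a sequence abnormal when its [SIGN] representation lies outside the boundary."""
    distance = np.linalg.norm(sign_output - center)
    return bool(distance > boundary)

# Toy usage with random vectors in place of real decoder outputs.
train_outputs = np.random.randn(100, 64)   # 100 training sequences, 64-dim [SIGN] outputs
center = compute_center(train_outputs)
print(is_abnormal(np.random.randn(64), center, boundary=3.0))
```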