Research Article

Compressed Wavelet Tensor Attention Capsule Network

Figure 2

(a) Architecture of the compressed tensor self-attention block, which combines mode embedding via 1 × 1 convolution to realize mode matricization with Nyström-based self-attention modules. (b) Architecture of the Nyström-based self-attention module. The combination of mode matricization and self-attention modules captures multidirectional interdependencies. ⊕ denotes element-wise matrix addition.
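The Nyström-based self-attention module named in the caption approximates full softmax attention through a small set of landmark rows, avoiding the n × n attention matrix. Below is a minimal NumPy sketch of this idea; the landmark choice (segment means) and the function names are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def nystrom_attention(Q, K, V, m=8):
    """Approximate softmax attention with m landmarks (Nyström method).

    Q, K, V: (n, d) arrays. Landmarks are segment means of the query/key
    rows -- a common choice; the paper's exact scheme may differ.
    """
    n, d = Q.shape
    scale = 1.0 / np.sqrt(d)
    idx = np.array_split(np.arange(n), m)
    Q_l = np.stack([Q[i].mean(axis=0) for i in idx])   # (m, d) query landmarks
    K_l = np.stack([K[i].mean(axis=0) for i in idx])   # (m, d) key landmarks
    F = softmax(Q @ K_l.T * scale)      # (n, m)
    A = softmax(Q_l @ K_l.T * scale)    # (m, m)
    B = softmax(Q_l @ K.T * scale)      # (m, n)
    # softmax(Q K^T / sqrt(d)) V  ≈  F · pinv(A) · (B V)
    return F @ np.linalg.pinv(A) @ (B @ V)

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 16))
out = nystrom_attention(X, X, X, m=8)
print(out.shape)  # (64, 16)
```

The approximation reduces the attention cost from O(n²) to roughly O(nm) with m ≪ n landmarks, which is what makes the tensor self-attention block "compressed".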