Research Article

Semi-supervised Learning for Automatic Modulation Recognition Using Haar Time–Frequency Mask and Positional–Spatial Attention

Figure 4

Specific structure of the proposed attention mechanism, PSA. The network consists of an extractor and a classifier; the extractor is a stack of five convolutional layers. PSA adapts to the strip-like shape of the signal in the STFT spectrogram, performing a positional step and a spatial step in sequence to address the limited receptive field of the convolutional layers.
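The caption only names the two stages of PSA, not their internals. As an illustrative sketch, the sequence "positional step, then spatial step" might be realized as axis-wise positional gating (row and column descriptors, suited to strip-shaped STFT features) followed by a channel-pooled spatial attention map. The pooling and sigmoid-gating choices below are assumptions for illustration, not the authors' exact design; a minimal NumPy version:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def positional_step(x):
    # x: feature map of shape (C, H, W).
    # Pool along width and height separately to obtain per-row and
    # per-column descriptors (assumed coordinate-style positional gating).
    h_desc = x.mean(axis=2, keepdims=True)   # (C, H, 1)
    w_desc = x.mean(axis=1, keepdims=True)   # (C, 1, W)
    # Gate the map with both positional descriptors via broadcasting.
    return x * sigmoid(h_desc) * sigmoid(w_desc)

def spatial_step(x):
    # Collapse channels to one spatial attention map and re-weight
    # every (h, w) location (assumed spatial-attention form).
    att = sigmoid(x.mean(axis=0, keepdims=True))  # (1, H, W)
    return x * att

def psa(x):
    # Positional step followed by spatial step, in sequence,
    # mirroring the two stages named in the caption.
    return spatial_step(positional_step(x))

feat = np.random.randn(8, 64, 128)  # toy (C, H, W) STFT-like feature map
out = psa(feat)
print(out.shape)
```

Both steps are shape-preserving, so the module can be dropped between convolutional layers of the extractor without changing downstream dimensions.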