| Network | Layer (operation) | Input shape | Output shape |
| --- | --- | --- | --- |
| Generator network | Fully connected layer | [None, 128] | [None, 256] |
| | Batch normalization (batch_norm) | [None, 256] | [None, 256] |
| | Nonlinear activation (ReLU) | [None, 256] | [None, 256] |
| | Deconvolution layer 1 (conv2d_transpose) | [None, 2, 2, 64] | [None, 4, 4, 16] |
| | Batch normalization (batch_norm) | [None, 4, 4, 16] | [None, 4, 4, 16] |
| | Nonlinear activation (ReLU) | [None, 4, 4, 16] | [None, 4, 4, 16] |
| | Deconvolution layer 2 (conv2d_transpose) | [None, 4, 4, 16] | [None, 8, 8, 1] |
| | Nonlinear activation (tanh) | [None, 8, 8, 1] | [None, 8, 8, 1] |
| Discriminator network | Convolutional layer 1 (conv2d) | [None, 8, 8, 1] | [None, 4, 4, 32] |
| | Batch normalization (batch_norm) | [None, 4, 4, 32] | [None, 4, 4, 32] |
| | Nonlinear activation (LeakyReLU) | [None, 4, 4, 32] | [None, 4, 4, 32] |
| | Convolutional layer 2 (conv2d) | [None, 4, 4, 32] | [None, 2, 2, 64] |
| | Batch normalization (batch_norm) | [None, 2, 2, 64] | [None, 2, 2, 64] |
| | Nonlinear activation (LeakyReLU) | [None, 2, 2, 64] | [None, 2, 2, 64] |
| | Fully connected layer | [None, 256] | [None, 7] |
| | Softmax | [None, 7] | [None, 7] |
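The table leaves a few details implicit: the generator's 256-unit fully connected output must be reshaped to [None, 2, 2, 64] before the first deconvolution (2 × 2 × 64 = 256), and kernel sizes, strides, and padding are not specified. The following is a minimal TensorFlow/Keras sketch of the generator under assumed settings (3×3 kernels, stride 2, 'same' padding) chosen so that it reproduces the shapes listed above; it is an illustration, not the source's exact implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_generator():
    """Generator sketch matching the table's shapes.

    Kernel size (3), stride (2), and padding ('same') are assumptions;
    the table specifies only the input/output shapes of each layer.
    """
    return tf.keras.Sequential([
        tf.keras.Input(shape=(128,)),
        layers.Dense(256),                       # [None, 128] -> [None, 256]
        layers.BatchNormalization(),
        layers.ReLU(),
        # Implicit in the table: reshape the 256-unit vector to a
        # [2, 2, 64] feature map (2 * 2 * 64 = 256) for the deconvolutions.
        layers.Reshape((2, 2, 64)),
        layers.Conv2DTranspose(16, kernel_size=3, strides=2,
                               padding="same"),  # -> [None, 4, 4, 16]
        layers.BatchNormalization(),
        layers.ReLU(),
        layers.Conv2DTranspose(1, kernel_size=3, strides=2, padding="same",
                               activation="tanh"),  # -> [None, 8, 8, 1]
    ], name="generator")
```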
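A matching sketch of the discriminator, under the same assumptions (3×3 kernels, stride 2, 'same' padding) plus an assumed LeakyReLU slope of 0.2; the Flatten layer supplies the implicit [None, 2, 2, 64] to [None, 256] transition before the fully connected layer.

```python
def build_discriminator():
    """Discriminator sketch matching the table's shapes.

    Kernel size, stride, padding, and the LeakyReLU slope (0.2) are
    assumptions not given in the table.
    """
    return tf.keras.Sequential([
        tf.keras.Input(shape=(8, 8, 1)),
        layers.Conv2D(32, kernel_size=3, strides=2,
                      padding="same"),           # -> [None, 4, 4, 32]
        layers.BatchNormalization(),
        layers.LeakyReLU(0.2),
        layers.Conv2D(64, kernel_size=3, strides=2,
                      padding="same"),           # -> [None, 2, 2, 64]
        layers.BatchNormalization(),
        layers.LeakyReLU(0.2),
        layers.Flatten(),                        # implicit: -> [None, 256]
        layers.Dense(7),                         # -> [None, 7] (7 classes)
        layers.Softmax(),
    ], name="discriminator")
```

Calling `build_generator().summary()` or `build_discriminator().summary()` prints the per-layer output shapes, which can be checked against the table.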