Attention-Based Image-to-Video Translation for Synthesizing Facial Expression Using GAN
Table 1
List of experiments conducted.
| Notation | Experiment | Dataset used |
|---|---|---|
| AffineGAN | Baseline work | MUG and local facial expression dataset |
| MAEGAN | Expression intensity inferred as in the baseline; the boundary image is used as an informative region for the local discriminator, and an MAE loss is used to train the generator and the discriminators | MUG and local facial expression dataset |
| MAEGAN + SA | MAEGAN with the addition of a self-attention layer and with the local discriminator | |
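To illustrate the "+ SA" variant, the sketch below shows a SAGAN-style self-attention layer over spatial feature maps, of the kind typically inserted into a GAN generator or discriminator. This is an illustrative assumption, not the paper's exact implementation: the channel-reduction factor of 8 and the zero-initialized residual gate `gamma` follow common practice and may differ from the authors' layer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention(nn.Module):
    """SAGAN-style self-attention block (sketch; hyperparameters assumed,
    not taken from the paper)."""

    def __init__(self, in_channels):
        super().__init__()
        # 1x1 convolutions project features to query/key/value spaces;
        # the //8 channel reduction is a conventional choice.
        self.query = nn.Conv2d(in_channels, in_channels // 8, kernel_size=1)
        self.key = nn.Conv2d(in_channels, in_channels // 8, kernel_size=1)
        self.value = nn.Conv2d(in_channels, in_channels, kernel_size=1)
        # Learned blending weight; initialized to zero so the layer starts
        # as an identity mapping and attention is phased in during training.
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (b, h*w, c//8)
        k = self.key(x).flatten(2)                    # (b, c//8, h*w)
        attn = F.softmax(torch.bmm(q, k), dim=-1)     # (b, h*w, h*w)
        v = self.value(x).flatten(2)                  # (b, c, h*w)
        out = torch.bmm(v, attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                   # residual connection
```

Because `gamma` starts at zero, adding this layer to a pretrained generator does not disturb its initial outputs; the network learns how much attention to mix in.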