Research Article

[Retracted] Research on the Construction of a Bidirectional Neural Network Machine Translation Model Fused with Attention Mechanism

Table 5

Hyperparameter design and specific meaning.

Parameter name    Value         Specific meaning
batch_size        64            Mini-batch size
lr                0.0001        Learning rate
logdir            “-path” path  Path to save model parameters
maxlen            20            Maximum length of a sentence (in tokens)
min_cut           20            Minimum word frequency; words occurring fewer times are discarded
hidden_unit       512           Size of hidden layers and word embeddings
num_blocks        6             Number of stacked encoder/decoder blocks
num_epoch         20            Number of passes over the entire training set
num_heads         8             Number of attention heads
dropout_rate      0.1           Dropout rate
Sinusoid          False         Whether to use sinusoidal positional encoding (False: learned positional embeddings)
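As a sketch, the hyperparameters in Table 5 can be gathered into a single configuration object. The class and field names below are illustrative and do not come from the paper's code; the `logdir` value is left as the placeholder given in the table.

```python
from dataclasses import dataclass

@dataclass
class TransformerConfig:
    """Hyperparameters from Table 5 (names are illustrative, not the authors')."""
    batch_size: int = 64        # mini-batch size
    lr: float = 0.0001          # learning rate
    logdir: str = "-path"       # placeholder path for saved model parameters
    maxlen: int = 20            # maximum sentence length in tokens
    min_cut: int = 20           # words with frequency below this are discarded
    hidden_unit: int = 512      # hidden layer / word embedding size
    num_blocks: int = 6         # stacked encoder/decoder blocks
    num_epoch: int = 20         # passes over the entire training set
    num_heads: int = 8          # attention heads
    dropout_rate: float = 0.1   # dropout probability
    sinusoid: bool = False      # False: learned positional embeddings

cfg = TransformerConfig()
# Multi-head attention requires the embedding size to split evenly across heads:
assert cfg.hidden_unit % cfg.num_heads == 0  # 512 / 8 = 64 dims per head
```

Grouping the values this way also makes the head-dimension constraint explicit: with `hidden_unit = 512` and `num_heads = 8`, each attention head operates on 64-dimensional projections.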