| Step 1: input the training set as x_train and the targets as y_train |
| Step 2: assign the hyperparameters as embedding_dimension = 50, number_filters = 64, vocabulary_size = 15000 words, input_length = 400, dropout_rate = 0.4, strides = 5, activation_function = ReLU, kernel_size = 3, pool_size = 5, lstm_units = 50, batch_size = 32, number_epochs = 5, num_classes = 2, optimizer = Adam. |
| Step 3: initialize a Sequential model |
| Step 4: set the embedding layer as the input layer |
| model.add(Embedding(vocabulary_size, embedding_dimension, input_length)) |
| Step 5: add convolutional layer |
| model.add(Conv1D(number_filters, kernel_size, activation_function)) |
| Step 6: add max pooling layer |
| model.add(MaxPooling1D(pool_size, strides)) |
| Step 7: add LSTM layer |
| model.add(LSTM(lstm_units, activation_function, recurrent_activation, dropout_rate, return_sequences)) |
| Step 8: add dropout layer |
| model.add(Dropout(dropout_rate)) |
| Step 9: add dense layer |
| model.add(Dense(num_classes, activation_function = "sigmoid")) |
| Step 10: compilation |
| model.compile(loss_function, optimizer) |
| Step 11: training |
| model.fit(x_train, y_train, number_epochs, batch_size) |
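The steps above can be sketched in Keras roughly as follows. This is a minimal sketch, not the authors' implementation: the loss function is left unspecified in the pseudocode, so binary cross-entropy is assumed here, and a single sigmoid output unit is used in place of num_classes = 2 (the usual choice for two classes with a sigmoid). The dummy x_train/y_train arrays stand in for the real training data, with sequences padded to input_length = 400.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Embedding, Conv1D, MaxPooling1D,
                                     LSTM, Dropout, Dense)

# Step 2: hyperparameters from the pseudocode
embedding_dimension = 50
number_filters = 64
vocabulary_size = 15000
input_length = 400
dropout_rate = 0.4
strides = 5
kernel_size = 3
pool_size = 5
lstm_units = 50
batch_size = 32

# Steps 3-9: build the Embedding -> Conv1D -> MaxPooling -> LSTM stack
model = Sequential()
model.add(Embedding(vocabulary_size, embedding_dimension))          # Step 4
model.add(Conv1D(number_filters, kernel_size, activation="relu"))   # Step 5
model.add(MaxPooling1D(pool_size=pool_size, strides=strides))       # Step 6
model.add(LSTM(lstm_units, dropout=dropout_rate))                   # Step 7
model.add(Dropout(dropout_rate))                                    # Step 8
# One sigmoid unit for a binary decision (an assumption; the pseudocode's
# num_classes = 2 with sigmoid would instead require one-hot targets).
model.add(Dense(1, activation="sigmoid"))                           # Step 9

# Steps 10-11: compile and train (loss choice is an assumption)
model.compile(loss="binary_crossentropy", optimizer="adam",
              metrics=["accuracy"])

# Dummy data in place of the real x_train / y_train; a single epoch
# keeps the demonstration quick (the pseudocode uses number_epochs = 5).
x_train = np.random.randint(0, vocabulary_size, size=(64, input_length))
y_train = np.random.randint(0, 2, size=(64, 1))
model.fit(x_train, y_train, epochs=1, batch_size=batch_size, verbose=0)
```

Note that `MaxPooling1D` with pool_size = strides = 5 shortens the convolved sequence five-fold before it reaches the LSTM, which is what makes the recurrent layer tractable on 400-token inputs.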