Input: A set of sentences S = {S1, S2, ..., Snd} from the dataset, where each sentence contains words {w1, w2, ..., wn}
Output: L ∈ {Sad: 0, Hate: 1, Anger: 2, Neutral: 3, Happy: 4}
Assumption: Every document in the dataset is in English.

for each sentence Si in S do
    Level I: Pre-processing
        1. Tokenize the sentence into a word vector:
               wt = f(Si),  Si = {w1, w2, ..., wn} ∈ S
           Output: wt = {w'1, w'2, ..., w'n}
        2. Pad with zeros to fix the dimensions:
               wp = g(wt),  wt = {w'1, w'2, ..., w'n}
           Let the word count of the longest sentence be n + k.
           Output: wp = {w'1, w'2, ..., w'n, 0, 0, ..., 0}  (k zeros)
    Level II: CNN-BiLSTM model
        if Si belongs to the training data then
            train the model: my_model.fit(L, L_tr)
        else
            predict the label: my_model.predict(L, L_ts)
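The two pre-processing steps map directly onto standard text utilities. Below is a minimal sketch of Level I, assuming TensorFlow/Keras; the sample sentences are hypothetical, and the Keras Tokenizer and pad_sequences stand in for the functions f and g of the pseudocode.

```python
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Hypothetical sample of S; any list of English sentences works here.
sentences = ["I am so happy today", "this result makes me angry"]

tokenizer = Tokenizer()                       # f: maps each word to an integer index
tokenizer.fit_on_texts(sentences)
wt = tokenizer.texts_to_sequences(sentences)  # step 1: wt = tokenized word vectors

max_len = max(len(seq) for seq in wt)         # word count of the longest sentence (n + k)
wp = pad_sequences(wt, maxlen=max_len,
                   padding="post", value=0)   # step 2: append k zeros to shorter sentences
print(wp)
```

Padding with `padding="post"` appends the zeros after the words, matching the output form wp = {w'1, ..., w'n, 0, 0, ...} in the algorithm.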
|
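Level II can likewise be sketched as a standard Keras CNN-BiLSTM stack. All layer sizes, hyperparameters, and the synthetic arrays below are illustrative assumptions, not the configuration reported here; the sketch also uses the standard Keras fit/predict signatures, which take the padded inputs and integer labels as separate arguments.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Embedding, Conv1D, MaxPooling1D,
                                     Bidirectional, LSTM, Dense)

# Hypothetical padded data standing in for the training/test splits (L_tr, L_ts).
L_tr = np.random.randint(1, 10000, size=(100, 20))  # padded training sentences
y_tr = np.random.randint(0, 5, size=(100,))         # integer emotion labels (0..4)
L_ts = np.random.randint(1, 10000, size=(10, 20))   # padded test sentences

my_model = Sequential([
    Embedding(input_dim=10000, output_dim=128),    # word indices -> dense vectors
    Conv1D(64, kernel_size=3, activation="relu"),  # CNN: local n-gram features
    MaxPooling1D(pool_size=2),
    Bidirectional(LSTM(64)),                       # BiLSTM: forward + backward context
    Dense(5, activation="softmax"),                # L in {Sad, Hate, Anger, Neutral, Happy}
])
my_model.compile(optimizer="adam",
                 loss="sparse_categorical_crossentropy",
                 metrics=["accuracy"])

# Training branch of the algorithm.
my_model.fit(L_tr, y_tr, epochs=5, batch_size=32)
# Prediction branch: argmax over the softmax scores gives the label L.
L = my_model.predict(L_ts).argmax(axis=1)
```

The convolution extracts local n-gram features from the embedded sentence, and the bidirectional LSTM then reads those features in both directions before the softmax layer assigns one of the five emotion labels.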