Asudani, Deepak Suresh; Nagwani, Naresh Kumar; Singh, … - In: Data Technologies and Applications 56 (2022) 4, pp. 483-505
Bidirectional Encoder Representations from Transformers (BERT) pre-trained word embeddings are used to identify relationships between words, which helps to … The experiments compare deep learning model performance without embeddings, with GloVe embeddings, and with BERT embeddings. The experiments show that … The experiment reveals that the CNN model with GloVe embeddings gives slightly better accuracy than the model with BERT embeddings and …