SENTIMENT ANALYSIS OF SOCIAL MEDIA TEXT USING TRANSFORMER-BASED LANGUAGE MODELS: A STUDY ON PUBLIC OPINION MINING AND ITS APPLICATIONS IN DECISION-MAKING
Keywords:
sentiment analysis of social media, transformer-based language models, public opinion mining, decision-making

Abstract
This study evaluates the effectiveness of modern transformer-based architectures, namely BERT and RoBERTa, for sentiment analysis of social media texts, with the ultimate goal of identifying public opinion and informing decision-making. Twitter provides highly informative, real-time streams of public sentiment, yet the informal language, frequent sarcasm, and abundant abbreviations make it difficult to ascertain true positive or negative polarity. We compare the contextualized embeddings of the transformer-based models BERT and RoBERTa against conventional machine learning algorithms, SVM and Logistic Regression, on the Sentiment140 and SemEval-2017 Task 4A datasets. Preprocessing pipelines were standardized across all models, and performance was measured using accuracy, precision, recall, and F1-score. RoBERTa emerged as the strongest model, achieving F1-scores of 91.86% on Sentiment140 and 88.96% on SemEval, and proved significantly more robust and generalizable than the classical models. A p-value below 0.001 consistently confirms the superiority of the transformer-based methods. These results demonstrate that deep contextual models handle noisy, unstructured text effectively, and they support the practical application of such models in areas like public health, governance, and brand management, where real-time sentiment insights can inform policy and business decisions.
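As a minimal illustration of the evaluation protocol described above, the metrics (accuracy, precision, recall, and F1-score) can be computed with scikit-learn. This is a hypothetical sketch: the labels and predictions below are invented placeholders, not the study's actual data, and macro-averaging is assumed.

```python
# Hypothetical sketch of the abstract's evaluation metrics using scikit-learn.
# The label arrays are illustrative placeholders, not the paper's data.
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

y_true = [1, 0, 1, 1, 0, 1, 0, 0]  # gold sentiment labels (1 = positive)
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]  # model predictions

acc = accuracy_score(y_true, y_pred)
# Macro-averaging treats both sentiment classes equally (an assumption here).
prec, rec, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0
)
print(f"accuracy={acc:.2f} precision={prec:.2f} recall={rec:.2f} f1={f1:.2f}")
# → accuracy=0.75 precision=0.75 recall=0.75 f1=0.75
```

In practice the same four calls would be applied to the held-out test splits of Sentiment140 and SemEval-2017 Task 4A for each model being compared.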