Transparent Peer Review By Scholar9
A Review on Machine Learning Based Models for Hate Speech Detection on Social Media Platforms
Abstract
In this paper, we present a state-of-the-art review of machine learning and deep learning models for hate speech detection. Hate speech, abusive language, threats, and derogation are examples of such incidents. Abuse in the form of hate speech is not limited to one gender; it affects everyone. Understanding the dynamic patterns of hate speech (incidents, geographical prevalence, demographics, etc.) is crucial for designing strategies to analyze hate speech activity. Social media platforms act as information systems that collect and organize hate-speech-related content from their users. This content is analyzed to extract meaningful patterns from volumes of social media data far too large to monitor minute by minute. Capturing the contextual dependencies among lexical items in the data is necessary to detect hate speech. Very few existing studies address hate speech detection in terms of user behavior. In this study, we therefore treat hate speech as a rapidly growing online problem intended to harm the people it targets. Such incidents promote social inequities and asymmetries by making online spaces inhospitable and inaccessible.
Sivaprasad Nadukuru Reviewer
04 Oct 2024 02:38 PM
Approved
Relevance and Originality
The paper addresses a highly relevant issue in today's digital society: the prevalence of hate speech on social media platforms. With the increasing reliance on these platforms for communication, the need for effective detection methods has never been more pressing. The originality of the paper lies in its comprehensive review of machine learning and deep learning models specifically tailored for hate speech detection, emphasizing the need to understand user behavior and the contextual nuances of language.
Methodology
While the paper outlines the importance of analyzing patterns and contextual dependencies in hate speech detection, it lacks a detailed methodology section. It would benefit from specifying the types of machine learning and deep learning models reviewed, as well as the criteria for selecting these models. Additionally, discussing how data was collected and analyzed would enhance the methodological rigor. Providing a framework for how various models compare in terms of performance and effectiveness would also strengthen this section.
Validity & Reliability
The paper’s assertions about the growing problem of hate speech are valid, supported by current literature and examples. However, to improve reliability, the authors should incorporate empirical data or case studies that illustrate the effectiveness of specific models in real-world scenarios. This would lend credibility to their review and help substantiate their claims about the inadequacy of existing studies focusing on user behavior.
Clarity and Structure
The paper is generally well-structured, with a logical flow from the introduction of hate speech issues to the discussion of detection models. However, to enhance clarity, the use of headings and subheadings would help to better organize the content. Including bullet points or tables summarizing key findings from the reviewed models could also aid in comprehension. Furthermore, clearer definitions of technical terms and concepts would make the paper more accessible to a broader audience.
Result Analysis
The analysis of existing models is a crucial component that the paper should expand upon. While it mentions the need for contextual understanding, it would benefit from a more thorough examination of how different models perform against various metrics (e.g., precision, recall, F1-score). Additionally, discussing the implications of these findings for future research and practical applications in moderation on social media platforms would provide a more holistic view of the issue. Concluding with potential directions for future research in hate speech detection, especially focusing on user behavior, would also enhance the paper's contribution to the field.
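To make the suggested metric comparison concrete, precision, recall, and F1-score can all be derived from confusion-matrix counts. A minimal Python sketch follows; the counts used are illustrative assumptions, not results from any model reviewed in the paper:

```python
# Precision, recall, and F1-score from confusion-matrix counts for a
# binary hate-speech classifier (positive class = "hate speech").

def precision_recall_f1(tp, fp, fn):
    """Return (precision, recall, f1) given true positives, false
    positives, and false negatives; guards against division by zero."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1

# Hypothetical counts: 80 posts correctly flagged, 20 flagged in error,
# 40 hateful posts missed.
p, r, f = precision_recall_f1(tp=80, fp=20, fn=40)
print(f"precision={p:.2f} recall={r:.2f} f1={f:.2f}")
# prints: precision=0.80 recall=0.67 f1=0.73
```

Reporting all three together matters here because hate speech is typically a minority class: a model can reach high precision while its low recall leaves most abusive posts undetected, which the F1-score penalizes.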
IJ Publication Publisher
Thank You Sir