This study presents a large-scale evaluation of hate speech detection using five established hate speech datasets. We find that LLMs not only match but often surpass the performance of current benchmark machine learning models in identifying hate speech. We propose four diverse prompting strategies that optimize the use of LLMs for hate speech detection. Our study reveals that a meticulously crafted reasoning prompt can effectively capture the context of hate speech by fully utilizing the knowledge base of LLMs, significantly outperforming existing techniques. Furthermore, although LLMs provide a rich knowledge base for contextual hate speech detection, suitable prompting strategies are crucial for leveraging that knowledge effectively and efficiently.

https://paperswithcode.com/paper/an-investigation-of-large-language-models-for
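As a rough illustration of the idea, the sketch below shows how a reasoning-style prompt might be wrapped around an LLM call to classify a post as hateful or not. The prompt wording, the `query_llm` helper, and the label parsing are assumptions made for illustration; they are not the paper's actual prompts, models, or evaluation code.

```python
# Minimal sketch of a reasoning-style prompt for hate speech detection.
# NOTE: query_llm is a hypothetical stand-in for any chat-completion client;
# the prompt text below is illustrative, not the prompt used in the paper.

REASONING_PROMPT = (
    "You are a content moderation assistant.\n"
    "Read the post below, reason step by step about its target, intent, and "
    "context, then end your answer with a single label: HATE or NOT_HATE.\n\n"
    'Post: "{post}"\n'
    "Reasoning and final label:"
)


def query_llm(prompt: str) -> str:
    """Hypothetical wrapper around an LLM chat-completion API."""
    raise NotImplementedError("plug in your LLM client here")


def classify_post(post: str) -> str:
    """Return 'HATE' or 'NOT_HATE' based on the model's final label."""
    response = query_llm(REASONING_PROMPT.format(post=post))
    # The prompt asks for the label last, so inspect the trailing text.
    final = response.strip().upper()
    return "NOT_HATE" if final.endswith("NOT_HATE") else "HATE"
```

The key design choice such a prompt reflects is that the model is asked to reason about target, intent, and context before committing to a label, rather than producing a bare yes/no answer.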