Harnessing Large Language Models: Fine-Tuned BERT for Detecting Charismatic Leadership Tactics in Natural Language

Bibliographic Details
Published in: 2024 IEEE 3rd Conference on Information Technology and Data Science (CITDS), pp. 1-6
Main Authors: Saeid, Yasser; Neuburger, Felix; Krugl, Stefanie; Huster, Helena; Kopinski, Thomas; Lanwehr, Ralf
Format: Conference Proceeding
Language: English
Published: IEEE, 26.08.2024
DOI: 10.1109/CITDS62610.2024.10791373

Summary: This work investigates the identification of Charismatic Leadership Tactics (CLTs) in natural language using a fine-tuned Bidirectional Encoder Representations from Transformers (BERT) model. Drawing on an extensive corpus of CLT examples generated and curated for this task, our methodology entails training a machine learning model capable of accurately identifying the presence of these tactics in natural language. A performance evaluation is conducted to assess the effectiveness of our model in detecting CLTs. We find that the total accuracy over the detection of all CLTs is 98.96%. The results of this study have significant implications for research in psychology and management, offering potential methods to simplify the currently elaborate assessment of charisma in texts.