Harnessing Large Language Models: Fine-Tuned BERT for Detecting Charismatic Leadership Tactics in Natural Language
Published in | 2024 IEEE 3rd Conference on Information Technology and Data Science (CITDS), pp. 1 - 6 |
---|---|
Main Authors | , , , , , |
Format | Conference Proceeding |
Language | English |
Published | IEEE, 26.08.2024 |
DOI | 10.1109/CITDS62610.2024.10791373 |
Summary: | This work investigates the identification of Charismatic Leadership Tactics (CLTs) in natural language using a fine-tuned Bidirectional Encoder Representations from Transformers (BERT) model. Based on an extensive corpus of CLTs generated and curated for this task, our methodology entails training a machine learning model that is capable of accurately identifying the presence of these tactics in natural language. A performance evaluation is conducted to assess the effectiveness of our model in detecting CLTs. We find that the total accuracy over the detection of all CLTs is 98.96%. The results of this study have significant implications for research in psychology and management, offering potential methods to simplify the currently elaborate assessment of charisma in texts. |
---|---|
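To illustrate the kind of pipeline the abstract describes, the sketch below fine-tunes a BERT classifier to flag whether a sentence contains a charismatic leadership tactic. This is not the authors' code: the `bert-base-uncased` checkpoint, the binary label scheme (CLT present vs. absent), the toy sentences standing in for the curated corpus, and all hyperparameters are assumptions made purely for demonstration.

```python
# Minimal illustrative sketch of BERT fine-tuning for CLT detection.
# Assumptions: bert-base-uncased, binary labels, toy data, generic hyperparameters.
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-uncased"   # assumed base checkpoint
NUM_LABELS = 2                     # assumed: 1 = CLT present, 0 = absent


class CLTDataset(Dataset):
    """Wraps (sentence, label) pairs; the corpus itself is hypothetical."""

    def __init__(self, texts, labels, tokenizer, max_len=128):
        self.enc = tokenizer(texts, truncation=True, padding="max_length",
                             max_length=max_len, return_tensors="pt")
        self.labels = torch.tensor(labels)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: v[i] for k, v in self.enc.items()}
        item["labels"] = self.labels[i]
        return item


def main():
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForSequenceClassification.from_pretrained(
        MODEL_NAME, num_labels=NUM_LABELS)

    # Toy examples standing in for the curated CLT corpus described in the abstract.
    texts = ["Together we will build a future worth fighting for.",
             "The quarterly report is due on Friday."]
    labels = [1, 0]
    loader = DataLoader(CLTDataset(texts, labels, tokenizer),
                        batch_size=2, shuffle=True)

    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
    model.train()
    for _ in range(3):                     # assumed number of epochs
        for batch in loader:
            optimizer.zero_grad()
            out = model(**batch)           # returns loss when labels are given
            out.loss.backward()
            optimizer.step()

    # Inference: predict whether a new sentence contains a CLT.
    model.eval()
    with torch.no_grad():
        enc = tokenizer("We stand at the dawn of a new era.",
                        truncation=True, return_tensors="pt")
        pred = model(**enc).logits.argmax(dim=-1).item()
    print("CLT detected" if pred == 1 else "no CLT detected")


if __name__ == "__main__":
    main()
```

A per-tactic evaluation as reported in the paper would additionally require one label (or one classification head) per CLT; the sketch collapses this to a single binary decision for brevity.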