Transparent Peer Review By Scholar9
Automated Evaluation of Speaker Performance Using Machine Learning: A Multi-Modal Approach to Analyzing Audio and Video Features
Abstract
In this paper, we propose a novel framework for evaluating the speaking quality of educators using machine learning techniques. Our approach integrates audio and video data, leveraging features such as facial expressions, gestures, speech pitch, volume, and pace to assess a speaker's overall effectiveness. We collect and process data from a set of recorded teaching sessions, extracting a variety of features using tools such as Amazon Rekognition for video analysis and AWS speech-to-text services for transcription. The framework then trains a range of machine learning models, including Logistic Regression, K-Nearest Neighbors, Naive Bayes, Decision Trees, Random Forests, and Support Vector Machines, to classify speakers as either "Good" or "Bad" according to predefined quality indicators. Classification is further refined through feature extraction, in which metrics such as eye contact, emotional state, speech patterns, and question engagement are quantified. After a thorough analysis of the dataset, we apply hyperparameter optimization and evaluate the models using ROC-AUC scores to identify the most accurate predictor of speaker quality. The results demonstrate that Random Forests and Support Vector Machines offer the highest classification accuracy, achieving an ROC-AUC score of 0.89. This research provides a comprehensive methodology for automated speaker evaluation, which could be applied in a variety of educational and training environments to improve speaker performance.
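The video-feature step described above can be illustrated with a minimal sketch that turns one face entry from a Rekognition DetectFaces-style response into the kinds of scalar features the abstract mentions (dominant emotion, eye contact). The response shape follows Rekognition's documented format, but the 20-degree head-pose threshold used as an eye-contact proxy is an illustrative assumption, not a value taken from the paper:

```python
def extract_face_features(face_details):
    """Convert one Rekognition FaceDetails entry into scalar features.

    `face_details` is assumed to follow the DetectFaces response shape:
    an "Emotions" list of {"Type", "Confidence"} dicts and a "Pose"
    dict with "Yaw"/"Pitch" angles in degrees.
    """
    # Dominant emotion = the highest-confidence emotion label.
    emotions = face_details.get("Emotions", [])
    dominant = (
        max(emotions, key=lambda e: e["Confidence"])["Type"] if emotions else "UNKNOWN"
    )

    # Rough eye-contact proxy: the head is approximately facing the camera.
    # The 20-degree cutoff is a hypothetical choice for illustration.
    pose = face_details.get("Pose", {})
    facing_camera = abs(pose.get("Yaw", 0.0)) < 20 and abs(pose.get("Pitch", 0.0)) < 20

    return {"dominant_emotion": dominant, "eye_contact": int(facing_camera)}


# Example with a trimmed sample response:
sample = {
    "Emotions": [
        {"Type": "HAPPY", "Confidence": 93.4},
        {"Type": "CALM", "Confidence": 4.1},
    ],
    "Pose": {"Yaw": 8.0, "Pitch": -4.0, "Roll": 1.0},
}
features = extract_face_features(sample)  # → {'dominant_emotion': 'HAPPY', 'eye_contact': 1}
```

In a full pipeline, features like these would be aggregated over all frames of a session before being passed to the classifiers.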
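The model-comparison step (training the listed classifiers, tuning hyperparameters, and ranking by ROC-AUC) can be sketched with scikit-learn. This is a generic sketch on synthetic data standing in for the extracted speaker features; the parameter grids are illustrative assumptions, not the paper's actual settings:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the extracted audio/video feature matrix,
# with a binary "Good"/"Bad" label.
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Candidate models from the abstract, each with a small illustrative grid.
candidates = {
    "LogisticRegression": (LogisticRegression(max_iter=1000), {"C": [0.1, 1, 10]}),
    "KNearestNeighbors": (KNeighborsClassifier(), {"n_neighbors": [3, 5, 9]}),
    "NaiveBayes": (GaussianNB(), {}),
    "DecisionTree": (DecisionTreeClassifier(random_state=0), {"max_depth": [3, 5, None]}),
    "RandomForest": (RandomForestClassifier(random_state=0), {"n_estimators": [100, 300]}),
    "SVM": (SVC(probability=True, random_state=0), {"C": [0.1, 1, 10]}),
}

scores = {}
for name, (model, grid) in candidates.items():
    # Hyperparameter optimization via cross-validated grid search on ROC-AUC.
    search = GridSearchCV(model, grid, scoring="roc_auc", cv=5)
    search.fit(X_tr, y_tr)
    # Rank models by held-out ROC-AUC.
    proba = search.predict_proba(X_te)[:, 1]
    scores[name] = roc_auc_score(y_te, proba)

best = max(scores, key=scores.get)
```

On the paper's real features, the same loop would surface Random Forests and SVMs as the top performers reported in the abstract.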