Human-in-the-Loop Automated Software Testing: Enhancing Coverage and Reducing False Positives Through Interactive Machine Learning
Abstract
Software testing is a critical component of software development, yet traditional automation often suffers from low coverage and high false-positive rates. This paper introduces a Human-in-the-Loop (HITL) automated software testing framework that leverages interactive machine learning (IML) to increase test coverage and reduce false positives. By incorporating domain experts into the machine learning loop, the system refines test case generation and bug classification in real time. We propose a hybrid architecture that integrates human feedback loops with test synthesis and prioritization modules. Experimental validation on multiple open-source projects demonstrates improved precision and recall, particularly in anomaly detection and regression testing. This work advances intelligent software testing by aligning human insight with algorithmic rigor, enabling more reliable and scalable QA workflows.
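To make the abstract's feedback loop concrete, the following is a minimal sketch of one plausible HITL round: a model scores candidate test failures, a domain expert labels the cases the model is least certain about (uncertainty sampling), and the model is refit on the enlarged labeled set. All names, the 1-D "anomaly score" feature, the oracle, and the query budget are illustrative assumptions, not the paper's actual architecture.

```python
# Hypothetical HITL loop: score candidates, query the human on the most
# uncertain ones, retrain. Scores stand in for real failure features.
import random

random.seed(0)

def train(examples):
    """Fit a trivial 1-D classifier: the decision threshold is the midpoint
    between the mean score of labeled bugs and labeled non-bugs."""
    bugs = [x for x, y in examples if y == 1]
    nonbugs = [x for x, y in examples if y == 0]
    return (sum(bugs) / len(bugs) + sum(nonbugs) / len(nonbugs)) / 2

def human_label(score):
    # Stand-in for the domain expert; ground truth here is score > 0.5.
    return 1 if score > 0.5 else 0

# Unlabeled pool of anomaly scores for generated test failures,
# plus a small seed set of expert-labeled examples.
pool = [random.random() for _ in range(200)]
labeled = [(0.9, 1), (0.1, 0)]

for _ in range(5):                       # five interactive rounds
    threshold = train(labeled)
    # Uncertainty sampling: ask the expert about scores nearest the boundary.
    pool.sort(key=lambda s: abs(s - threshold))
    queried, pool = pool[:10], pool[10:]
    labeled += [(s, human_label(s)) for s in queried]

threshold = train(labeled)
errors = sum(1 for s in pool if (s > threshold) != (s > 0.5))
print(f"final threshold {threshold:.2f}, residual errors {errors}")
```

The design choice being illustrated is that expert effort is spent only where the model is uncertain, which is how a HITL system can cut false positives without asking the human to review every flagged failure.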