Abstract
In high-stakes domains such as healthcare, law, and finance, the need for interpretable artificial intelligence (AI) systems has become increasingly critical. Neural-symbolic integration, which combines the learning capabilities of neural networks with the reasoning strengths of symbolic systems, has emerged as a promising approach to addressing the interpretability challenge. This paper provides a comparative analysis of currently available neural-symbolic integration techniques, evaluating their effectiveness in enhancing transparency and trust in decision-making processes. Key methods, historical developments, and empirical performance are reviewed. Findings suggest that while significant progress has been made, further refinement is necessary to fully operationalize neural-symbolic methods for deployment in critical applications.