Abstract
The convergence of artificial intelligence (AI), big data, and cybersecurity has fundamentally changed how organisations protect sensitive information in increasingly hostile digital environments. Zero Trust Architecture (ZTA) represents a paradigmatic shift away from classical perimeter-based security, but its reliance on continuous verification and granular access control raises new privacy concerns, particularly when personal or organisational data are processed. Conventional access-control systems remain vulnerable to re-identification, insider threats, and adversarial attacks that exploit data exposure. To address these concerns, this paper proposes a privacy-aware access-control framework that integrates data de-identification methods, including anonymisation, pseudonymisation, and differential privacy, into an AI-enhanced Zero Trust architecture. By combining de-identification at the data-collection layer, AI-driven risk scoring, and ZTA-enabled adaptive access controls, the framework reduces privacy risk while limiting the loss of system utility. Combining theoretical analysis with recent developments in AI-based privacy protection, the study develops a unified framework that strengthens resilience to re-identification, supports compliance with evolving regulations, and fosters trust in AI-supported access control. The results indicate that de-identification can coexist safely with AI-based decision-making to strengthen ZTA, although trade-offs remain between privacy protection and the accuracy of AI models. The study contributes to cybersecurity research through a systematic method for operationalising privacy-by-design in the Zero Trust context, providing a foundation for future work on federated learning, blockchain integration, and quantum-resistant access models.
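To make the three layers named in the abstract concrete, the following sketch illustrates (in minimal form) how pseudonymisation at the data-collection layer, a differentially private statistic, and a risk-score-gated access decision might fit together. All names, thresholds, and the key are hypothetical illustrations, not the paper's actual implementation.

```python
import hashlib
import hmac
import random

# Hypothetical secret key; in practice this would come from a key-management system.
SECRET_KEY = b"demo-key"

def pseudonymise(user_id: str) -> str:
    # Keyed hash (HMAC-SHA256): yields a stable pseudonym that is
    # not reversible without the secret key.
    return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()[:16]

def dp_noise(value: float, epsilon: float, sensitivity: float = 1.0) -> float:
    # Laplace mechanism for differential privacy: add noise with
    # scale = sensitivity / epsilon. A Laplace sample is the difference
    # of two exponential samples with the same mean (the scale).
    scale = sensitivity / epsilon
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return value + noise

def access_decision(risk_score: float, threshold: float = 0.7) -> str:
    # Zero Trust adaptive control: every request is evaluated against
    # a (here AI-supplied) risk score; high risk is denied.
    return "deny" if risk_score > threshold else "allow"
```

For example, a request could be logged under `pseudonymise("alice")` rather than the raw identity, aggregate counts released via `dp_noise(count, epsilon=1.0)`, and the request itself gated by `access_decision(score)`. The trade-off the abstract mentions is visible here: a smaller epsilon adds more noise, protecting privacy but degrading the signal available to the AI risk model.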