Abstract
Recent developments in artificial intelligence (AI) have brought the moral, ethical, and legal protections surrounding the technology under scrutiny. A more ethical approach to managing AI is urgently needed, as are better measures for evaluating the privacy and security of AI systems. To address these issues, we propose an AI maturity model and an AI trust framework intended to improve confidence in the design and administration of AI systems. For an AI system to be trusted, humans and machines must first reach a shared understanding of its performance. The framework's "entropy lens," grounded in information theory, aims to improve transparency of, and confidence in, unregulated "black box" AI systems. In highly competitive and unpredictable settings, the high entropy of AI systems can erode human trust in them. This study applies insights from entropy research to improve the reliability and efficiency of autonomous human-machine teams and systems, particularly those composed of hierarchical components and their interconnections. Viewing trust in AI through this lens also reveals untapped potential for team efficiency. We present two examples showing that the framework can accurately gauge confidence in the design and administration of AI systems.

Generative artificial intelligence (GAI), with its remarkable capacity to produce realistic data, has triggered a transformative wave across many fields, including machine learning, healthcare, commerce, and the entertainment industry. This survey provides a thorough analysis of the privacy and security issues associated with GAI, organized around five key perspectives that are essential for understanding these complexities. The study covers the main types of generative models, GAI architectures, practical applications, and recent advances in the field. It also reviews existing security methods and suggests long-term remedies, with an emphasis on participation from users, developers, institutions, and lawmakers.
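To make the information-theoretic quantity behind the "entropy lens" concrete, the minimal Python sketch below (not taken from the paper; the function name and example distributions are illustrative assumptions) computes the Shannon entropy of a model's predictive distribution, where higher entropy corresponds to the kind of output uncertainty the abstract associates with reduced human trust.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy (in bits) of a discrete output distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A confident prediction concentrates probability mass: low entropy.
confident = [0.95, 0.03, 0.02]
# An uncertain prediction spreads mass across outcomes: high entropy.
uncertain = [0.40, 0.35, 0.25]

print(f"confident: {shannon_entropy(confident):.3f} bits")  # ~0.335
print(f"uncertain: {shannon_entropy(uncertain):.3f} bits")  # ~1.559
```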