Privacy-Preserving Gen AI in Multi-Tenant Cloud Environments
Abstract
The adoption of Generative AI (GenAI) in multi-tenant cloud environments has raised significant data privacy and security concerns. As organizations increasingly host and deploy AI models in the cloud, the central challenge is preserving the privacy of user data. This paper examines how privacy-preserving techniques can be embedded in GenAI systems operating in multi-tenant cloud settings. In particular, it studies the cryptographic techniques, data anonymization methods, and access control frameworks that allow sensitive data to be used by AI models while maintaining performance and scalability. It also discusses the trade-offs between privacy preservation and system efficiency under the constraints of multi-tenant environments. Privacy threats such as data leakage, adversarial attacks, and model inversion are analyzed, and state-of-the-art solutions for mitigating these risks are presented. Finally, the paper reviews the regulatory frameworks and ethical implications surrounding GenAI systems and recommends best practices for decision making in privacy-preserving GenAI deployments. The discussion highlights the need to balance AI-driven innovation in the cloud with strong privacy guarantees that sustain trust in cloud-based AI.
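As a minimal illustration of the data anonymization techniques the abstract refers to, the sketch below pseudonymizes direct identifiers in a tenant's records before they reach a shared GenAI pipeline. It is not taken from the paper: the field names, record structure, and per-tenant salt are hypothetical, and keyed hashing is only one of several possible anonymization strategies.

```python
# Illustrative sketch (not from the paper): pseudonymize direct identifiers
# in a tenant's records before passing them to a shared GenAI pipeline.
# Field names, the per-tenant salt, and the record layout are hypothetical.
import hashlib
import hmac

# Hypothetical per-tenant secret used to key the pseudonymization, so that
# identical values from different tenants map to different pseudonyms.
TENANT_SALT = b"tenant-42-secret-salt"

# Hypothetical set of fields treated as direct identifiers.
IDENTIFIER_FIELDS = {"name", "email", "account_id"}


def pseudonymize(value: str, salt: bytes = TENANT_SALT) -> str:
    """Replace an identifier with a keyed, irreversible pseudonym."""
    return hmac.new(salt, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]


def anonymize_record(record: dict) -> dict:
    """Return a copy of the record with direct identifiers pseudonymized."""
    return {
        key: pseudonymize(str(val)) if key in IDENTIFIER_FIELDS else val
        for key, val in record.items()
    }


if __name__ == "__main__":
    raw = {
        "name": "Alice Example",
        "email": "alice@example.com",
        "account_id": "A-1001",
        "ticket_text": "Cannot access my dashboard",
    }
    print(anonymize_record(raw))
```

Keying the hash with a per-tenant secret is one way to prevent cross-tenant linkage of pseudonyms; in practice this step would sit alongside the cryptographic and access control measures discussed in the paper rather than replace them.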