Governance and security framework for Gen AI apps
Security framework required for safe adoption of Gen AI applications
Generative AI applications introduce new cybersecurity risks. Traditional cybersecurity solutions address the specific risks of Gen AI poorly: conversational user experiences let the application perform a wide variety of tasks and place no limits on what type of information the user can enter. Many organizations have issued policies and end-user training, but the lack of technical security controls is holding back many rollouts of generative AI technologies.
Based on over 50 interviews, NROC Security has conceptualized a security framework required for the safe adoption of Gen AI applications. It consists of four discrete policy items that govern access, data, use case, and the responsible use of AI. For each item, the framework defines the essential policy control (the WHAT) and the implementation challenges (the HOW).
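To make the framework concrete, the sketch below models the four policy items as a single per-application policy object. It is a minimal Python illustration; the class, field, and value names are ours and not part of any NROC product.

```python
from dataclasses import dataclass, field

@dataclass
class GenAIPolicy:
    """Hypothetical per-application policy covering the four items."""
    app_name: str
    # Access: which directory groups may use the app at all.
    allowed_groups: set[str] = field(default_factory=set)
    # Data: information classes that must not enter a prompt.
    blocked_data_classes: set[str] = field(default_factory=set)
    # Use case: the tasks the app is approved for.
    approved_use_cases: set[str] = field(default_factory=set)
    # Responsible AI: whether outputs must be labeled as AI-generated.
    require_ai_disclosure: bool = True

# Example policy for a hypothetical internal coding assistant.
policy = GenAIPolicy(
    app_name="code-assistant",
    allowed_groups={"engineering"},
    blocked_data_classes={"customer-pii", "source-secrets"},
    approved_use_cases={"code-generation", "code-review"},
)
```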
As with most cybersecurity responses to new technologies entering the enterprise, the framework is a mix of old and new, and finding the most efficient way to implement it is key to meeting the business requirements. Active Directory groups and attributes should be the basis for access, but context is specific to Gen AI apps. Conversational user experiences challenge data security tools to cope with large copy/pastes or ingestion of unstructured information, where keywords alone might trigger many alerts.
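Continuing the GenAIPolicy sketch above, an access decision anchored in directory groups and a content-aware data check might look like the following. The group names and the deliberately trivial classifier are invented for illustration; a real deployment would classify content rather than rely on keyword matching alone.

```python
def is_access_allowed(user_groups: set[str], policy: GenAIPolicy) -> bool:
    # Access: the user must belong to at least one allowed directory group.
    return bool(user_groups & policy.allowed_groups)

def classify_prompt(prompt: str) -> set[str]:
    # Stand-in for a real content classifier; keyword matching alone
    # would flood analysts with alerts on large unstructured pastes.
    findings: set[str] = set()
    if "BEGIN RSA PRIVATE KEY" in prompt:
        findings.add("source-secrets")
    return findings

def is_data_allowed(prompt: str, policy: GenAIPolicy) -> bool:
    # Data: block the request if any blocked data class is detected.
    return not (classify_prompt(prompt) & policy.blocked_data_classes)
```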
Use case anti-drift and responsible AI are entirely new policy items. Yesterday's SaaS did not need controls over what it was used for: the application was built for a specific use case and stored and processed information fairly rigidly to deliver its value. Regulating who is allowed to use which tool for creating software code, for example, is a new requirement. Likewise, not all apps are qualified to give medical advice, yet they offer some when asked.
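Use case anti-drift could be enforced by classifying each prompt into a coarse task category and checking it against the app's approved list. The classifier below is a trivial stand-in, and the categories and rules are hypothetical; a production system would use a proper intent model.

```python
def infer_use_case(prompt: str) -> str:
    # Trivial stand-in for a real intent classifier.
    lowered = prompt.lower()
    if "def " in prompt or "function" in lowered:
        return "code-generation"
    if any(word in lowered for word in ("diagnosis", "symptom", "dosage")):
        return "medical-advice"
    return "general"

def is_use_case_allowed(prompt: str, policy: GenAIPolicy) -> bool:
    # Block prompts that drift outside the app's approved tasks,
    # e.g. asking a coding assistant for medical advice.
    return infer_use_case(prompt) in policy.approved_use_cases
```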
Responsible AI is a fast-evolving space. Keeping track of which content was created by AI is already good 'bedside manners', is increasingly a compliance requirement, and on some occasions a disclosure of the AI's involvement to a customer or employee is mandated. The same goes for explainability, i.e., knowing who used what tool with what data to create a piece of content.
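In practice, that kind of explainability comes down to keeping a provenance record per generated artifact. A minimal sketch, with field names of our own choosing:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    # Who used what tool with what data to create a piece of content.
    user: str
    tool: str
    input_data_refs: list[str]  # references to source data, not copies
    output_id: str
    ai_generated: bool = True   # supports labeling and disclosure duties
    timestamp: str = ""

record = ProvenanceRecord(
    user="jane.doe",
    tool="code-assistant",
    input_data_refs=["ticket-4711"],
    output_id="doc-2024-0042",
    timestamp=datetime.now(timezone.utc).isoformat(),
)
print(json.dumps(asdict(record)))  # append to an audit log
```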
The makers of the apps and the providers of the underlying models will each do their part, but governing and securing the usage, data, and results remains the responsibility of the enterprise. NROC was founded to enable enterprises to govern and secure the adoption of generative AI technologies. We are all in the early innings of this and look forward to innovating with our customers. For more information about NROC Security, please see our website at www.nrocsecurity.com.