June 10, 2024

Governance and security framework for Gen AI apps

Security framework required for safe adoption of Gen AI applications

Generative AI applications introduce new cybersecurity risks. Traditional cybersecurity solutions address Gen AI-specific risks poorly: in a conversational user experience, the Gen AI application can perform a wide variety of tasks and places no limits on the type of information the user can enter. Many organizations have issued policies and training for end users, but the lack of technical security controls is holding back many rollouts of generative AI technologies.

Based on over 50 interviews, NROC Security has conceptualized a security framework for the safe adoption of Gen AI applications. It consists of four discrete policy elements that govern access, data, use case, and the responsible use of AI. The table below describes the framework, setting out the essential policy controls (WHAT) and the implementation challenges (HOW).

| AI policy element | Controls (WHAT) | Implementation questions (HOW) |
| --- | --- | --- |
| Access | Which users are allowed to access Gen AI apps? | How to efficiently administer user access rights? |
| Access | What contextual controls need to be satisfied to access a Gen AI app (e.g., geolocation, use of corporate ID)? | How to reliably assess the user context and quickly allow/deny an attempted access? |
| Access | Which Gen AI apps are allowed to be accessed? | How to rate the risk of various Gen AI apps? |
| Data Security | What information should not flow into Gen AI apps? | How to recognize personally identifiable information and other discrete secrets (see the sketch below the table)? How to recognize sensitive corporate information reliably and cost-efficiently? |
| Use case ‘anti-drift’ | What content is allowed/disallowed to be created with Gen AI apps? | How to recognize content types (e.g., text, multimedia)? How to recognize the category of the generated content (e.g., legal, financial, medical, software)? |
| Responsible AI | What Gen AI app and input data was used to create a specific piece of content? | How to log and tag outputs and provide a great user experience for retrieving explanations? |
| Responsible AI | What created content is harmful and needs to be removed? | How to recognize harmful content (hate, violence)? |
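To make the first Data Security question concrete, here is a minimal sketch of pattern-based detection. The patterns are illustrative assumptions, not a complete catalog; a real deployment would layer ML-based entity recognition on top of regexes like these.

```python
import re

# Illustrative patterns only -- not a complete or production-grade catalog.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def find_sensitive_data(prompt: str) -> dict[str, list[str]]:
    """Return any matches of known PII/secret patterns in a user prompt."""
    return {
        label: matches
        for label, pattern in PII_PATTERNS.items()
        if (matches := pattern.findall(prompt))
    }

# Block or redact before the prompt ever reaches the Gen AI app.
if find_sensitive_data("my key is AKIA1234567890ABCDEF"):
    print("Prompt contains sensitive data -- redact or block it.")
```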

As in most cybersecurity responses to new technologies entering the enterprise, the framework is a mix of old and new, and finding the most efficient way to implement it is key to meeting the business requirements. Active Directory groups and attributes should be the basis for access, but the contextual controls are specific to Gen AI apps. Conversational user experiences challenge data security: users can copy/paste or ingest large volumes of unstructured information, and keyword matching alone might trigger many alerts.
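As a concrete illustration of that mix of old and new, the sketch below gates access on both a directory group and Gen AI-specific request context. The group name, country allowlist, and field names are hypothetical.

```python
from dataclasses import dataclass

ALLOWED_GROUP = "genai-users"           # hypothetical AD security group
ALLOWED_COUNTRIES = {"US", "DE", "FI"}  # hypothetical geolocation policy

@dataclass
class AccessRequest:
    user_groups: set[str]     # resolved from the directory (e.g., Active Directory)
    country: str              # derived from the request's geolocation
    corporate_identity: bool  # True if the user signed in with a corporate ID

def allow_gen_ai_access(req: AccessRequest) -> bool:
    """Allow access only when directory membership AND Gen AI context checks pass."""
    return (
        ALLOWED_GROUP in req.user_groups
        and req.country in ALLOWED_COUNTRIES
        and req.corporate_identity
    )
```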

Use case anti-drift and responsible AI are entirely new policy elements. Yesterday’s SaaS did not need controls over what it was used for: the application was made for a certain use case and stored and processed information fairly rigidly to deliver its value. For example, regulating who is allowed to use which tool for creating software code is a new requirement. In the same way, not all apps are qualified to give medical advice, yet many will offer it when asked.
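One way to picture an anti-drift control: classify each generated output and check its category against a per-app allowlist. Both the app names and the toy classifier below are hypothetical stand-ins; a real control would use an ML or LLM-based categorizer.

```python
# Illustrative per-app policy: which content categories each app may produce.
APP_ALLOWED_CATEGORIES = {
    "marketing-copilot": {"marketing", "general"},
    "dev-assistant": {"software", "general"},
}

def classify_category(text: str) -> str:
    """Toy stand-in for a real content classifier (legal, financial,
    medical, software, ...)."""
    return "medical" if "diagnosis" in text.lower() else "general"

def enforce_anti_drift(app: str, generated_text: str) -> bool:
    """Block output whose category falls outside the app's approved use cases."""
    category = classify_category(generated_text)
    return category in APP_ALLOWED_CATEGORIES.get(app, set())
```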

Responsible AI is a fast-evolving space. Keeping track of what content was created by AI is already good ‘bedside manner’, is increasingly a compliance requirement, and in some cases mandates disclosing the AI’s involvement to a customer or employee. The same goes for explainability, i.e., knowing who used what tool with what data to create a piece of content.
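A minimal way to support that kind of explainability is to record, for every generated artifact, which app and inputs produced it, and to tag the output so the record can be retrieved later. The sketch below uses illustrative field names and hashes the inputs rather than storing them verbatim.

```python
import hashlib
import json
import uuid
from datetime import datetime, timezone

def log_provenance(app: str, model: str, prompt: str, output: str) -> str:
    """Append a who/what/when record for a generated artifact and return
    a tag that can travel with the content's metadata."""
    record_id = str(uuid.uuid4())
    record = {
        "id": record_id,
        "app": app,
        "model": model,
        # Hash inputs/outputs rather than storing them verbatim.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
    with open("provenance.log", "a") as f:
        f.write(json.dumps(record) + "\n")
    return f"ai-generated:{record_id}"
```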

The makers of the apps and the providers of the underlying models will each do their part, but governance and security of the usage, data, and results remain the responsibility of the enterprise. NROC was founded to enable enterprises to govern and secure the adoption of generative AI technologies. We are all in the early innings of this and look forward to innovating with our customers. For more information about NROC Security, please see our website at www.nrocsecurity.com.
