April 11, 2025

CISO Guide to Securing Employee Use of GenAI

Best Practices for Securing Public GenAI Apps and LLM Apps in the Enterprise

Public GenAI apps, such as ChatGPT and Copilot, are now widely used at work, but their use is not without risk. Providers have created a maze of data protection promises, which means that the wrong AI, subscription, login, setting, or geography can leave an end user's prompt free for the AI maker to use. At the same time, NROC Security has observed an average of 32 instances of PII in every 100 prompts.

Fundamentally, GenAI apps represent a new breed of software: use cases are invented on the fly, user inputs are unpredictable, any data can be used, and there is no guarantee of output accuracy. This creates several issues for traditional security architectures: lack of visibility, inability to control user-driven data exposure, inability to monitor potentially inaccurate outputs, and gaps in identity and access management. End users need guidance, not friction; the security team needs effective controls; and the compliance team needs evidence of policy compliance.
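To make the data-exposure control concrete, here is a minimal illustrative sketch of a prompt guardrail that flags common PII patterns before a prompt leaves the enterprise boundary. The pattern names and function names are hypothetical, not part of any NROC Security product; real deployments would use far richer detection (NER models, checksums, locale-aware patterns).

```python
import re

# Hypothetical minimal guardrail: regex patterns for a few common PII types.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\+?\d[\d\s().-]{7,}\d\b"),
}

def find_pii(prompt: str) -> list[str]:
    """Return the names of PII categories detected in a prompt."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(prompt)]

def allow_prompt(prompt: str) -> bool:
    """Block prompts containing detected PII; allow the rest."""
    return not find_pii(prompt)
```

Even a toy filter like this illustrates the architectural point: the control has to sit in the prompt path itself, guiding the user before the data reaches the GenAI provider, rather than at the network perimeter.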

This guide, based on over 130 practitioner interviews, defines these issues and suggests best practices for how CISO teams can go beyond the traditional playbook. It concludes by showing how well-executed AI security can build trust in AI usage and provide insights for shaping the AI agenda. Security can be an accelerator for organizational learning and innovation.


Tricks and treats - privacy and data protection terms of popular GenAI services

It is encouraging that the big GenAI vendors, such as OpenAI and Microsoft, have been granting more transparency into how they treat the content they collect from end users. It's only reasonable that you know where your prompt may end up: will it be used to train the model, or could it even surface in an AI-generated response to somebody else?


NROC Security releases support for Grok

Grok 3 by xAI was launched on 17 February 2025 and is now supported by NROC Security.


NROC Security Becomes First GenAI Security Vendor to Support DeepSeek AI

NROC Security announces support for DeepSeek AI, becoming the first security vendor for GenAI apps at work to do so.


Introducing a Risk Heat Map for employee GenAI SaaS usage

NROC Security introduces the GenAI SaaS Heat Map for identifying the risks of employee GenAI usage in enterprises.

Safely allow more GenAI at work and drive continuous learning and change