Great promise of improved productivity with AI
Protecting data and IPR is important, but so is unleashing the creativity of business users
Generative AI has been hyped a lot, and many of us have personal stories about how it has made our daily work more efficient and sometimes more interesting. Now the first business stories about its benefits are starting to be discussed at country clubs and even in the headlines, such as Spotify’s recommendation engine (https://www.aimusicpreneur.com/ai-music-news/spotify-cuts-costs-to-double-down-on-ai-investments/) or Morgan Stanley’s AI assistant for wealth advisors (https://www.cnbc.com/2024/06/26/morgan-stanley-openai-powered-assistant-for-wealth-advisors.html).
How do we expect AI to add value to business? We have seen most executives divide it into two buckets: external and internal use cases. External use cases often aim at differentiating the customer experience or creating other competitive advantages, for example through dynamic pricing. Internal use cases tend to be mostly AI-based productivity boosters for knowledge workers across business functions such as sales, marketing, finance and HR. It seems there is an app available, or in development, for every imaginable business use case, as one can see for example from futurepedia.io with its 8,000+ listed apps.
“If it were easy, everyone would do it” also seems to apply when trying to realize business value from Gen AI. One has to consider data management and compliance from a new perspective. The data sets used and the use case boundaries can no longer be taken for granted, because a Gen AI application is effectively a black box to both its users and its makers. By design, an LLM is not even deterministic. Those are big differences from any SaaS of the past.
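To illustrate the non-determinism, here is a minimal sketch, assuming the OpenAI Python SDK and an invented prompt (the model name is only an example): the same question, asked twice with a non-zero temperature, can come back with two different answers, which is very unlike the fixed request/response behaviour of traditional SaaS.

```python
# Illustrative sketch only: the same prompt, sent twice with a non-zero
# temperature, can return different answers. Assumes the OpenAI Python SDK
# and an API key in the environment; the model name is an example.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = "Summarize our Q3 pricing strategy in one sentence."

for attempt in range(2):
    response = client.chat.completions.create(
        model="gpt-4o-mini",            # example model
        messages=[{"role": "user", "content": prompt}],
        temperature=0.8,                # sampling makes the output non-deterministic
    )
    print(f"Attempt {attempt + 1}: {response.choices[0].message.content}")
```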
New questions need to be raised about the responsible use of AI, which covers traceability, explainability and the prevention of misuse. Can a response be tied back to the training and grounding data via citations? That quickly takes the average organization to the deep end of the pool. And we are only scratching the surface here. A whole new world opens up when one has to start worrying about biases (politics or religion, for example) and how to identify plain misuse of AI in your organization (such as systematic extraction of grounding data, or violations of acceptable use policies). And then there is security and compliance.
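As one illustration of what “tied back via citations” could look like in practice, here is a minimal, hypothetical sketch; the documents, IDs and answer text are invented. Grounding snippets are passed to the model with explicit source IDs, and the returned answer is accepted only if every citation it contains maps back to a known document.

```python
# Hypothetical sketch of citation-based traceability: grounding snippets carry
# explicit source IDs, and the answer is only accepted if every citation it
# contains maps back to a known document. All data here is invented.
import re

grounding = {
    "DOC-001": "Travel expenses above 500 EUR require manager approval.",
    "DOC-002": "Customer data may not be pasted into external AI tools.",
}

# Prompt construction: each snippet is labelled so the model can cite it.
context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in grounding.items())
prompt = (
    "Answer using only the sources below and cite them as [DOC-ID].\n"
    f"{context}\n\nQuestion: Can I expense a 700 EUR flight without approval?"
)

# Placeholder for the model's answer (a real model call would go here).
answer = "No, expenses above 500 EUR need manager approval [DOC-001]."

cited = set(re.findall(r"\[(DOC-\d+)\]", answer))
unknown = cited - grounding.keys()
print("Citations resolve to grounding data:", bool(cited) and not unknown)
```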
The balancing act for security is to implement sufficient controls while allowing business users the freedom to experiment and thereby innovate on the external and internal use cases. Here we see many different approaches. Some organizations start by installing the plumbing before allowing access. Others first try to gain visibility into usage patterns that today run with no controls at all. And regardless of the approach, there is often the question from management or the board of directors: “How is our organization going to benefit from Gen AI? What’s taking so long?” Running over the security team is never a good idea, but neither is waiting for IT and security to “figure it out”. In practice, allowing Gen AI apps into the organization and building security and compliance capabilities end up happening in parallel.
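To make “visibility into usage patterns” concrete, here is a minimal, hypothetical sketch; the policy patterns, log format and send_to_genai_app() stub are invented for illustration, not a description of any particular product. Every prompt sent to a Gen AI app is logged for audit, and obvious policy triggers are flagged rather than silently blocked.

```python
# Hypothetical sketch of usage visibility: log every prompt sent to a Gen AI
# app and flag obvious policy triggers for later review. The patterns, log
# format and send_to_genai_app() stub are invented for illustration only.
import json
import re
from datetime import datetime, timezone

POLICY_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def send_to_genai_app(prompt: str) -> str:
    """Stand-in for the actual call to whichever Gen AI app is in use."""
    return "(model response would appear here)"

def monitored_prompt(user: str, app: str, prompt: str) -> str:
    flags = [name for name, pattern in POLICY_PATTERNS.items() if pattern.search(prompt)]
    audit_record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "app": app,
        "prompt_chars": len(prompt),   # log the size, not the content, to limit exposure
        "policy_flags": flags,
    }
    print(json.dumps(audit_record))    # a real setup would ship this to a log store
    return send_to_genai_app(prompt)

monitored_prompt("alice@example.com", "chat-assistant",
                 "Draft a reply to customer john.doe@acme.com about the refund.")
```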
All in all, protecting your data and your IPR is important, but so is unleashing the creativity of business users. NROC was founded to innovate governance and data protection for the business use of AI. We are always keen to share our learnings and to innovate together with practitioners. For more information, please visit www.nrocsecurity.com