When “BYOD” becomes “BYOCWATWD”
Minimize the dangers when every day is “bring your own computer with AI to work” day
There is an age-old saying in IT: the biggest source of problems is often located between the seat and the keyboard. This is truer now than ever. In the past, it was at least possible to 1) remove accidentally entered data from a system, or 2) determine up front which data could be put into the system in the first place.
When we add AI into the equation, including a lot of unstructured data from different places, the paradigm changes. The data is not labeled (even if the document is, the copy-pasted snippet surely is not), and it is pulled from multiple locations across the corporate network. And if network access controls are not up to date, that data may contain multiple problematic datasets, or snippets of them.
As social media normalizes sharing various aspects of our lives, sharing becomes second nature and people think less about information privacy. For example, how many of us share sports activities on Strava (with location visible), or send out images whose EXIF data contains the date, time, and location?
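To illustrate how precisely an innocent photo can leak location: EXIF GPS tags store coordinates as degrees/minutes/seconds, and turning them into a pin on a map takes only a few lines. A minimal sketch in Python (the coordinate values below are hypothetical, not taken from a real photo):

```python
# EXIF GPS tags store latitude/longitude as (degrees, minutes, seconds)
# plus a hemisphere reference ("N"/"S", "E"/"W"). Converting them to
# decimal degrees -- i.e., a map-ready location -- is trivial.

def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style degrees/minutes/seconds to signed decimal degrees."""
    decimal = degrees + minutes / 60 + seconds / 3600
    # Southern and western hemispheres are negative in decimal notation.
    return -decimal if ref in ("S", "W") else decimal

# Hypothetical EXIF values, the kind any smartphone photo may carry:
lat = dms_to_decimal(37, 46, 30.0, "N")
lon = dms_to_decimal(122, 25, 12.0, "W")
print(f"{lat:.4f}, {lon:.4f}")  # prints "37.7750, -122.4200"
```

Anyone receiving the image, including a GenAI app it is uploaded to, can read these tags, which is why many organizations strip EXIF metadata before files leave their boundary.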
GenAI apps are transformational and data-intensive by nature: they are normally unconstrained in terms of data and use cases, and their behavior is non-deterministic. And given how vector databases work, reliably deleting specific data once it has been ingested is difficult, and often practically impossible.
Many find joy in working with AI, especially when personal productivity is boosted by quickly getting answers with background information, and by the ability to refine those answers through chat.
This has a significant impact on corporations and their GenAI programs. For example, getting an answer to the question “how many of our 5,000 customer contracts have this clause that we will invalidate?” is easy. All you need to do is upload the whole contract base to a GenAI tool, and you get your answer within a couple of minutes. But to what extent does the individual stop to think about which data should go to which AI app, or whether it is wise to put the data into an AI app at all?
Since sharing data comes naturally to us in our personal lives, as mentioned earlier, and when no parameters are set in the corporate environment, the same thought process applies: “everyone is doing it this way.” The worst part is that uploading company proprietary data to a GenAI tool can be extremely detrimental. Even more concerning, this can be done with the user’s own account and personal computer, precisely because the employer has decided to limit access to GenAI apps due to the risks involved. There is even a term for this: “bring your own computer with AI to work day,” a.k.a. BYOCWATWD.
The benefits of GenAI use are now becoming visible and measurable, but users are not necessarily ready for it. In fact, a recent podcast likened it to handing someone a sports car without teaching them the rules of the road. This definitely opens up vulnerabilities in the corporate environment - and they sit between the seat and the keyboard.
All of this requires more involvement and attention from both the employee and the employer. A great place to start is to build a solid foundation. The initial items to address early on are:
- Provide regular training for employees
- Map out which data may go to which AI (mapping use cases to GenAI apps)
- Allow the use of approved GenAI apps for work purposes
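The second and third items above can start as something very simple: an explicit, default-deny mapping from data classification levels to approved GenAI apps, checked by a proxy or browser plugin before data leaves the corporate boundary. A minimal sketch, using hypothetical classification labels and app names:

```python
# Hypothetical mapping of data classification levels to the GenAI apps
# approved to receive them. Anything not explicitly listed is denied.
APPROVED_APPS = {
    "public":       {"chatgpt-enterprise", "internal-assistant"},
    "internal":     {"internal-assistant"},
    "confidential": set(),  # e.g., customer contracts: no app approved yet
}

def is_upload_allowed(classification: str, app: str) -> bool:
    """Default-deny check: only explicitly approved pairs pass."""
    return app in APPROVED_APPS.get(classification, set())

print(is_upload_allowed("public", "internal-assistant"))        # True
print(is_upload_allowed("confidential", "chatgpt-enterprise"))  # False
```

The point is not the code but the exercise behind it: the mapping forces the organization to decide, case by case, which data classes are allowed into which apps, instead of leaving that judgment to each individual employee.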
But this is just the beginning. The key to getting the biggest productivity boost is figuring out how to enable the use of proprietary data in sanctioned GenAI apps (refer to the example above of uploading 5,000 customer contracts to a GenAI tool).
Want to keep up on the latest tips on how to enable business users to safely take advantage of Generative AI technologies? Subscribe to the NROC Security blog.