May 13, 2024

Learnings from early adopters about effective AI task forces

6 learnings that characterize the more effective AI task forces

Many organizations have assembled a task force (a working group or AI committee) to drive adoption of AI technologies across the organization. Typically, the right people participate: business, IT, security, legal, and sometimes HR. Yet there are still big differences in how effective these task forces are. From our discussions with dozens of early adopter organizations, we distilled the following 6 learnings that seem to characterize the more effective AI task forces.

Secure sponsorship - find your Gen AI champion

Top management buy-in is as essential here as in any change project. The task force's success requires both getting attention across the organization and overcoming change resistance or overblown risk aversion. Effective Gen AI champions lead organizations where business potential from Gen AI has already been identified, and they have a technical enough background to dispel myths and misconceptions in the management team and in broader organizational communications.

Proactive rather than reactive - exploration instead of gatekeeping

Less effective task forces see themselves as gatekeepers: “Come to us with your ideas and we will review and potentially approve them.” These task forces do not see a great inflow of ideas because the organization as a whole is not learning. More effective task forces proactively scout for business use cases, organize sessions to open up thinking in the business functions, and showcase potential tools. The goal is to accelerate organizational learning, and Gen AI tools are of such a nature that learning-by-doing is really the only way.

Monitor and amplify - don’t only strategize use cases top down

It’s good to be aware of the typical high-potential use cases of Gen AI: customer service, software modernization, and the marketing content chain, to mention a few. But those are very broad areas, whereas many Gen AI applications are actually rather narrow in their use case. Effective Gen AI task forces monitor the actual usage of Gen AI tools, reach out to heavy users to understand their use cases, and spread the word about promising use cases to other parts of the organization.

Trial and error is good - but not for security and compliance

Many organizations have been forced to scale back Gen AI usage because of a security incident. According to the overused cliché, users are the weakest link of any IT system. Here, however, the business users are also the main beneficiaries of the technology, which makes security enforcement a delicate balancing act. More effective Gen AI task forces define the acceptable risks up front and build the security and compliance framework in parallel with the rollout of tools for promising use cases. Ideally, basic safeguards and monitoring are in place already at the pilot stage, so that compliance with acceptable use policies and patterns in data usage are established before a technology is rolled out to a broad user base.

Take every opportunity to educate the business users (and hold them accountable)

Most task forces have offered training to business users. Some made the training a prerequisite for getting access to Gen AI tools. Most wrote the AI Acceptable Use Policy (AUP) as clearly and concisely as possible. The most advanced task forces do not stop there, but find ways to remind users along the way. A compelling intranet page that acts as a portal to the suggested tools and up-to-date instructions is a good addition. Even better if the security tools in use allow pointing users to the instruction material whenever they are about to do something risky.

Follow the potential and readiness: make internal showcases shine

Different functions and business units vary in their readiness to start the journey. The quantified potential is very different for a 500-person customer service organization and a 20-strong marketing team, but the marketing team may get to something of concrete value faster. Half of the success lies in the realized benefits across the organization; the other half is about raising awareness and opening up exploration in new parts of the company. Sometimes pursuing the smaller use cases and showcasing them is a way to move the organization as a whole and to develop Gen AI champions across business units and functions.

Driving the adoption of Gen AI technologies is a change management effort. It involves collective learning and exploration, which happen most effectively in an environment where people feel safe. Bringing in the right security and compliance safeguards early on is vital for accelerating the change. NROC was founded to enable business users to safely take advantage of Generative AI technologies. We are all in the early innings of this, and we look forward to innovating with our customers. For more information about NROC Security, please see our website at www.nrocsecurity.com.
