Shadow AI, the deployment of AI by employees or functional business units, is now a full reality and needs immediate focus from executive teams. Over three years ago, we published an article outlining actions organizations should take once employees become the main drivers of AI software purchases. Borrowing from previous technology cycles, we called this “Shadow AI”: the deployment and usage of AI by employees and business units without the involvement or knowledge of the IT organization, the risk function, or executive management. The meteoric rise of generative AI usage by individual users across the globe means that not only the IT organization but the leaders of all business functions need to react. Our original piece on Shadow AI recommended that organizations focus on three things: first, managing the increased number of AI models across the organization; second, establishing an AI governance model for the organization; and third, communicating broadly about AI so that accountability, leadership capabilities, and employee skills can improve. These three actions remain crucial, but more must be done now.
Three significant trends have driven the proliferation of software over the last 40 years: the deployment of PC software by individual users, the purchase of software by business units, and the deployment of software by consumers, often called the consumerization of IT. Each trend has in common that the software user, not the IT organization, determines when and how to deploy software. They are the background trends for Marc Andreessen’s statement, “Software is eating the world.” All three were “shadow” implementations of software. We are now in the eye of the storm of a fourth trend: the deployment of AI by employees, consumers, and business partners. Predicting the detailed outcomes of Shadow AI within an organization is not possible, but we can use history as a guide for how we react. Here are a few points:
Fighting shadow deployment by users and business functions is futile: As we learned from previous technology phases, fighting technology deployment by either the employee or the business unit is pointless. Instead, successful organizations focus on educating end-users about the total cost of AI technology deployments, including the technical debt they cause. Equally, the organization, led by HR and the IT organization, should educate users and managers across the organization about the opportunities AI offers and the challenges AI technologies pose. As we learned from previous “shadow technology implementations,” banning usage never works, especially for technologies where the price of implementation and use is low and the value for the individual is high. Education and communication work far better in achieving results that benefit the individual and the organization simultaneously.
Updating corporate competency and skills development is critical: AI output is sometimes less precise than it needs to be. AI is also increasingly becoming a tool that will replace specific organizational skills. Both require developing managers across the organization who understand how to lead and manage in an age of ubiquitous AI. The HR organization needs to take the lead in updating the corporate competency frameworks and ensuring continuous development of both manager and employee skills, so that people understand AI and how it can (and cannot) be used in the organization.
Expanding your Risk Management scope to include “AI Everywhere”: Corporate risk departments need to ensure immediately that their risk assessments cover AI models and algorithms deployed decentrally, outside the knowledge of the central IT organization. The possibility of bias in the data and output of AI models, or of inappropriate use of the models, increases risk across the organization. Corporate risk departments’ understanding of how to evaluate AI risk remains limited and needs immediate attention. Furthermore, the increased level of legislation around AI and of global and national competition means organizations need to upgrade their ability to track these changes.
Developing leaders in the management of and management with AI: With individual employees using AI to support or replace tasks that are part of their job, the manager’s role changes considerably. Determining which AI tools, and more importantly which results, are legitimate requires managers to understand AI in greater detail. This doesn’t mean managers have to dissuade employees from using AI tools, merely that they need to understand who uses them, for what purpose, and whether the output or outcome is valid. As AI tools proliferate within organizations, establishing AI best practices, new ways of working with AI, and approaches to addressing AI risks become tasks managers need to lead the organization through. Just as with previous generations of shadow technology implementations, the ability of managers to be proactively and positively involved separates successful organizations from the rest.
Managing Algorithms: Allowing employees and business units to use AI solutions poses the challenge of ending up with many environments and algorithms for which the organization must track the usage, impact, and total cost (and risk) of ownership. Given that many AI environments are black boxes, such as the current generation of generative AI (e.g., ChatGPT), understanding both the data used and the output of AI solutions becomes essential. Equally, with rising legislation around AI, a central repository or ModelOps environment becomes critical. Until now, large organizations have considered this only for centrally developed, purchased, and managed AI environments. However, with the deployment of AI solutions by employees and business unit managers, organizations may lose control again. Therefore, the IT organization (in partnership with the risk function and the HR department) must build an approach for managing distributed algorithms, along the lines sketched below. Once legislation around AI is in place, it will become the organization’s responsibility to ensure compliance for all AI solutions deployed.
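To illustrate what such a central repository could capture, here is a minimal sketch of an inventory record for a decentrally deployed AI solution. The field names and the register helper are hypothetical, chosen only to show the kind of metadata (accountable owner, purpose, data used, risk-review status) an organization might track; a real repository would live in a shared ModelOps or governance system rather than in memory.

```python
# Illustrative only: a minimal inventory record for decentrally deployed AI solutions.
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class AIDeployment:
    name: str                       # e.g., "Contract summarizer"
    owner: str                      # business unit or employee accountable for it
    vendor_or_model: str            # e.g., "external generative AI chatbot"
    purpose: str                    # the task it supports or replaces
    data_used: str                  # which (and whose) data flows into it
    reviewed_by_risk: bool = False  # has the risk function assessed it?
    review_date: Optional[date] = None
    notes: str = ""                 # known limitations, bias concerns, costs

# A simple in-memory registry; in practice this would be a shared, audited system.
registry: List[AIDeployment] = []

def register(deployment: AIDeployment) -> None:
    """Record a deployment so IT, risk, and HR can see what is actually in use."""
    registry.append(deployment)

register(AIDeployment(
    name="Marketing copy assistant",
    owner="Marketing, EMEA",
    vendor_or_model="External generative AI chatbot",
    purpose="Drafting first versions of campaign text",
    data_used="Public product descriptions only; no customer data",
))
```

Even a record as simple as this makes the questions raised above answerable when the risk function, an auditor, or a regulator asks who uses a given AI solution, for what purpose, and with which data.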
Other issues will need to be addressed. The main thing to remember is that the cat is out of the bag, and Shadow AI is here to stay. Employees and first-level managers will deploy AI outside the corporate approval processes. Based on previous shadow technology deployments, fighting this deployment is futile. Ultimately, it is the responsibility of the organization’s leadership team to ensure that the appropriate governance is in place.