With the accessibility of generative AI to employees, we are seeing the rise of “Shadow AI,” which poses similar challenges for CIOs, says Jay Upchurch, Executive Vice President and Chief Information Officer at SAS, as he elaborates on the specific business risks associated with the rise of Shadow AI within organizations.
Some edited excerpts:
Can you elaborate on the specific business risks associated with the rise of Shadow AI within organizations? How is SAS helping businesses mitigate these risks? And what complications does Shadow AI bring to the table for information leaders?
For years, CIOs have dealt with “Shadow IT”: technology solutions that come into an organization outside of the guardrails set by IT. Once these rogue solutions are in place inside an organization, they typically capture IT’s attention for one of two reasons: 1) because they’re successful and might be valuable throughout the organization; or 2) because they pose a security risk to the organization and its customers.
With the accessibility of generative AI to employees, what we’re seeing now is the rise of “Shadow AI,” which poses similar challenges for CIOs. Each CIO and organization will need to approach Shadow AI differently. Some come in heavy-handed and insist on policing it, pulling it, shutting it down. Unless your business is heavily regulated or you deal with customers whose data and information are sensitive, I generally don’t recommend that approach.
Instead, I advocate working with your organization’s division leaders to better understand their business strategy and how AI and generative AI can boost productivity to help them accomplish their goals faster and more efficiently. Show them why it’s to their advantage to move their AI and generative AI responsibilities to you and the IT team. Doing this ensures that CIOs don’t inadvertently stifle the curiosity of their well-meaning employees. It also helps demonstrate a commitment to being a partner, not a barrier, to new ways of doing things to enhance productivity.
As AI capabilities grow, how is SAS addressing the ethical considerations and potential biases arising from Shadow AI use? What role does Explainable AI play in this context?
AI isn’t new. SAS has been working with neural networks, computer vision and other forms of AI for years. In fact, SAS is behind many of the advances that transformed the way the world uses data today.
SAS takes a holistic view of AI that extends from executive oversight to cultural integration and market competitiveness. We believe we have a moral imperative to do good for those we serve with our AI software and solutions.
Because SAS is a responsible technology innovator and seeks to build trust in technology, we created an entire practice devoted to data ethics and AI transparency. Led by Reggie Townsend, who serves on the U.S. National AI Advisory Committee, the practice’s primary missions include helping SAS and our customers develop foundational levels of data and AI literacy. We believe that AI should not be “done to us”; it should be “done for us.” To make that happen, we first need to be better educated. Employees who are more literate about AI risks may be less likely to use AI outside the scope of their IT organization.
Trustworthy AI begins before a software developer writes the first line of code. As a society, we should aim for AI standards that are simple but robust. The standards should encompass capabilities like bias detection, explainability, decision auditability and model monitoring.
Because AI models may decay over time, our standards must account for the data used to train the models, the processes and people creating the models, and the model’s intended audience and use. Think of it like nutrition labels on food. We should have similarly understandable labels for the AI we use.
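To make the “nutrition label” idea concrete, here is a minimal sketch of what such a label and a basic decay check might look like in code. The field names, example values, and tolerance threshold are illustrative assumptions, not a SAS product schema or an industry standard.

```python
# Illustrative sketch only: field names and thresholds are hypothetical,
# not a SAS product schema or a formal industry standard.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ModelLabel:
    """Plain-language facts about a model, analogous to a nutrition label."""
    name: str
    intended_use: str                  # who the model serves and what decisions it supports
    training_data: str                 # description of the data used to train the model
    owners: List[str]                  # the people or teams accountable for the model
    known_limitations: List[str] = field(default_factory=list)
    bias_checks_performed: List[str] = field(default_factory=list)

def needs_review(baseline_accuracy: float, current_accuracy: float,
                 tolerance: float = 0.05) -> bool:
    """Flag a model for review if observed accuracy has decayed
    beyond an agreed tolerance since it was last validated."""
    return (baseline_accuracy - current_accuracy) > tolerance

if __name__ == "__main__":
    label = ModelLabel(
        name="credit-risk-scoring-v3",
        intended_use="Prioritize manual review of loan applications; not for automated denial.",
        training_data="De-identified 2019-2023 loan outcomes, region-balanced sample.",
        owners=["risk-analytics-team"],
        known_limitations=["Not validated for small-business lending."],
        bias_checks_performed=["Disparate impact ratio by age band and region."],
    )
    print(label)
    print("Needs review:", needs_review(baseline_accuracy=0.91, current_accuracy=0.84))
```

The point of the sketch is simply that the label travels with the model and is readable by non-specialists, while the monitoring check gives IT an objective trigger for re-validation rather than relying on ad hoc discovery.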
While Shadow AI poses challenges, can it also offer unexpected benefits? How can organizations integrate responsible Shadow AI initiatives into their broader technology strategy?
The hype around ChatGPT’s generative AI power in late 2022 ignited attention from individuals and organizations. Much like the introduction of the iPhone, ChatGPT represents one of the rare times emerging technology was targeted to consumers rather than corporations.
It’s natural for individual employees to be curious about the AI space, so understanding the technology and learning how the organization wants to “play” with it internally and externally is pivotal.
Because I believe there’s a strong connection between curiosity and innovation, I advise our customers to put the proper security guardrails in place and encourage the use of AI and generative AI to kickstart organizational innovation.
Based on your interactions with various clients, what trends are you observing across different industries regarding Shadow AI adoption and its impact on their IT landscape?
Shadow AI lives and breathes no matter the industry or the size of the organization. It’s hiding in dark corners, and you need to be aware of it. If you’re not, you’re in for a big surprise one day.
If you must hunt down where Shadow AI already exists in your organization, that’s a sign you’re not being proactive as a trusted partner. As an IT leader, you need to show up with answers. And if the divisions inside your organization aren’t involving you, ask yourself: Am I an impediment to AI progress?