

About Us
CIMCON Software, LLC, established in 1988, leads the field in
end-user computing (EUC) risk management, serving over 500
companies globally across diverse industries. Our software
solutions are comprehensive, well-tested, and feature-rich,
providing unmatched assurance of success. With
headquarters in Boston and offices in Europe and Asia, we
offer strong global support for EUC system implementation.
www.cimcon.com
Shadow AI: What is it and How to Manage the Risk from it?
Shadow AI refers to the use of AI applications or models within an organization without the IT
department's knowledge or approval. It poses risks in two key areas: internally, where
employees use AI tools like GenAI to write code or build applications without IT oversight, and
externally, where AI is embedded in third-party software or updates without the firm's
awareness. Managing the risks of Shadow AI is becoming increasingly important as AI adoption
grows across industries, requiring organizations to identify and mitigate its presence.
Why is it important?
AI adoption in financial services grew 2.5x from 2017 to 2022, with rising risks due to
unpredictable outputs. Costs scale fast: OpenAI's GPT model reportedly costs $1M a day to run,
and parameter counts jumped from 1 billion to 100 billion with GPT-4. Generative AI also has
higher-than-expected error rates. According to The Economist, 77% of bankers see AI as
crucial, yet avoiding its use is difficult because of shadow AI. As AI advances, its
complexity and risks will only increase.
Mitigating the Risk from Shadow AI
1. Identifying the internal use of GenAI: EUCs and models generated with GenAI can leak into
the public sphere or hallucinate and produce errors, so testing specific models and EUCs to
estimate the probability that GenAI was used to build them can be helpful.
2. Identifying AI Models within 3rd Party Applications: Monitoring the behavior of 3rd
party tools and executables, and looking for patterns that may indicate the use of AI, is a
necessary way to uncover the hidden risk of shadow AI. Consistent, scheduled scans for these
patterns are a practical way to keep that risk in check (a minimal scan sketch follows this list).
3. Interdependency Map: A model's level of risk depends heavily on the models and data
sources that serve as its inputs. An interdependency map lets you easily visualize these
relationships, and paying special attention to 3rd party models that feed into high-impact
models helps prioritize where to look for shadow AI (see the graph sketch after this list).
4. Security Vulnerabilities: Even when a firm knows that a 3rd party uses AI, it is
important to automate checks for security vulnerabilities in the underlying 3rd party AI
libraries (see the vulnerability-check sketch after this list).
5. Monitor 3rd Party Model Performance: Many 3rd party models are black boxes, and this is
where the risk of shadow AI is highest because firms do not know what techniques a vendor is
using. Monitoring 3rd party models for sudden changes in performance can be an indicator of
shadow AI (a simple change-detection sketch follows this list).
6. AI Testing Validation Suite: Maintain a comprehensive testing suite for models that can
likewise pick up unusual behavior indicating the use of shadow AI. An effective suite could
include checks for data drift, validity and reliability, fairness, interpretability, and code
quality, among others, with the results documented consistently in a standardized,
easy-to-follow format (a data drift check is sketched after this list).
7. Proper Controls, Workflows, and Accountability: Controlling the use of shadow AI in
internally developed tools starts with controlling who has access to which EUCs and models.
This can be enforced through an audit trail that tracks who changes which models, and through
approval workflows that establish accountability for who approved models that later behave
suspiciously (a minimal audit-trail sketch closes the examples after this list).
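
A scan of the kind described in point 2 could start from something like the following minimal
Python sketch, which searches a folder of third-party tool sources for imports of well-known AI
libraries. The watchlist, the scanned path, and the focus on Python files are illustrative
assumptions; a production scanner would also inspect binaries, installed packages, and network
behavior.

    # Minimal sketch: flag third-party source files that import well-known
    # AI/GenAI libraries. The watchlist and path are illustrative assumptions.
    import re
    from pathlib import Path

    AI_LIBRARIES = {"openai", "anthropic", "transformers", "torch",
                    "tensorflow", "langchain", "sklearn"}

    IMPORT_RE = re.compile(r"^\s*(?:import|from)\s+([A-Za-z_]\w*)", re.MULTILINE)

    def scan_for_ai_imports(root: str) -> dict:
        """Return {file: set of watched libraries imported} for .py files under root."""
        findings = {}
        for path in Path(root).rglob("*.py"):
            text = path.read_text(errors="ignore")
            hits = {m.group(1) for m in IMPORT_RE.finditer(text)} & AI_LIBRARIES
            if hits:
                findings[str(path)] = hits
        return findings

    if __name__ == "__main__":
        # Run on a schedule (e.g. nightly via cron) against the vendor tools folder.
        for file, libs in scan_for_ai_imports("./third_party_tools").items():
            print(f"{file}: possible AI usage via {sorted(libs)}")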
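
For the interdependency map in point 3, a directed graph is a natural structure. The sketch
below uses the networkx library; an edge A -> B means "A feeds into B". The model names and
the high-impact and 3rd-party tags are made up for illustration.

    # Minimal sketch of an interdependency map for prioritizing shadow AI review.
    import networkx as nx

    graph = nx.DiGraph()
    graph.add_edges_from([
        ("vendor_credit_score_api", "credit_risk_model"),   # 3rd-party input
        ("internal_loan_history",   "credit_risk_model"),
        ("credit_risk_model",       "capital_reserve_model"),
    ])
    third_party = {"vendor_credit_score_api"}
    high_impact = {"capital_reserve_model"}

    # For each high-impact model, list every 3rd-party node anywhere upstream;
    # these are the first places to look for shadow AI.
    for model in high_impact:
        upstream = nx.ancestors(graph, model)
        print(model, "depends on 3rd-party inputs:", sorted(upstream & third_party))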
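
One way to automate the checks in point 4 is to query a public advisory database such as
OSV.dev for every pinned AI dependency found in a vendor package. The sketch below assumes the
requests library and uses placeholder package pins; in practice the list would come from the
dependency scan above.

    # Minimal sketch: look up known vulnerabilities for pinned AI libraries
    # via the public OSV.dev query API. Package pins are placeholders.
    import requests

    AI_DEPENDENCIES = {"transformers": "4.30.0", "torch": "2.0.0"}

    def osv_vulnerabilities(name: str, version: str) -> list:
        resp = requests.post(
            "https://api.osv.dev/v1/query",
            json={"version": version,
                  "package": {"name": name, "ecosystem": "PyPI"}},
            timeout=30,
        )
        resp.raise_for_status()
        return [v["id"] for v in resp.json().get("vulns", [])]

    for pkg, ver in AI_DEPENDENCIES.items():
        ids = osv_vulnerabilities(pkg, ver)
        print(f"{pkg}=={ver}:", ", ".join(ids) if ids else "no known advisories")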
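
A very simple version of the monitoring in point 5 compares a recent window of a vendor model's
error rate against a trailing baseline and flags a large shift. The window size, the
three-standard-deviation threshold, and the numbers are illustrative assumptions.

    # Minimal sketch: flag a sudden shift in a 3rd-party model's error rate.
    import statistics

    def sudden_change(errors: list, window: int = 7, k: float = 3.0) -> bool:
        """True if the last `window` points deviate from the baseline mean
        by more than k baseline standard deviations."""
        if len(errors) < 2 * window:
            return False  # not enough history to compare
        baseline, recent = errors[:-window], errors[-window:]
        mu = statistics.mean(baseline)
        sigma = statistics.stdev(baseline) or 1e-9
        return abs(statistics.mean(recent) - mu) > k * sigma

    # Example: daily error rates from a vendor model; the jump in the final
    # week would trigger an alert and a question to the vendor about what changed.
    history = [0.050, 0.052, 0.049, 0.051, 0.050, 0.048, 0.051, 0.050, 0.049,
               0.052, 0.090, 0.095, 0.093, 0.091, 0.094, 0.092, 0.090]
    print("Investigate vendor model:", sudden_change(history))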
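
As one example of a check from the suite in point 6, the sketch below runs a two-sample
Kolmogorov-Smirnov test for data drift on a single numeric feature using SciPy. The 0.05
significance level and the synthetic data are illustrative; the same pattern extends to the
other checks, with every result written to a standardized report.

    # Minimal sketch of a data drift check: compare the distribution a model
    # was validated on against recent production inputs for one feature.
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)
    reference = rng.normal(loc=0.0, scale=1.0, size=1000)   # validation-time data
    production = rng.normal(loc=0.4, scale=1.0, size=1000)  # recent inputs (shifted)

    stat, p_value = ks_2samp(reference, production)
    drifted = p_value < 0.05
    print(f"KS statistic={stat:.3f}, p={p_value:.4f}, drift detected={drifted}")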
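
Finally, the audit trail in point 7 can be as simple as an append-only log recording who changed
or approved which model and when. The field names and JSON-lines storage below are assumptions
for illustration; a real deployment would live inside a governed workflow tool.

    # Minimal sketch of an append-only audit trail for EUCs and models.
    import datetime
    import json
    from pathlib import Path

    AUDIT_LOG = Path("model_audit_log.jsonl")

    def record_event(model: str, user: str, action: str, details: str = "") -> None:
        entry = {
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "model": model,
            "user": user,
            "action": action,   # e.g. "modified", "approved", "access_granted"
            "details": details,
        }
        with AUDIT_LOG.open("a") as fh:
            fh.write(json.dumps(entry) + "\n")

    record_event("credit_risk_model", "a.analyst", "modified", "updated scoring logic")
    record_event("credit_risk_model", "r.manager", "approved", "quarterly review sign-off")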
Effective Management of Shadow AI
Shadow AI poses a significant challenge for firms and organizations, and this issue is
likely to worsen as AI continues to proliferate. The main risk with Shadow AI is that its
presence often goes unnoticed until effective identification and mitigation tools are in
place. Managing Shadow AI is crucial not only due to regulatory pressures but also
because it increases the risk of costly errors. Utilizing well-established tools and a team
with over 25 years of experience is the most effective strategy to address this
challenge proactively and resolve issues before they escalate.
AI Risk Management Framework
Explore our AI Risk Management Policy to understand the diverse landscape of Artificial
Intelligence (AI), including supervised, unsupervised, and deep learning models. This concise
guide emphasizes building trustworthy AI aligned with the NIST AI Risk Management
Framework. Learn to evaluate and manage AI risk, foster a culture of risk awareness, and
implement regular testing with tools like ours, making this policy your essential toolkit for
responsible and effective AI utilization in your organization.
Contact Us
Boston (Corporate Office)
+1 (978) 692-9868
234 Littleton Road
Westford, MA 01886, USA
New York
+1 (978) 496-7230
394 Broadway
New York, NY 10013