By Avivah Litan, Distinguished VP Analyst at Gartner
The use of artificial intelligence (AI) increases an organization’s threat vectors and broadens its attack surface. A Gartner survey found that 41% of organizations had experienced an AI privacy breach or security incident. Unfortunately, many organizations aren’t well prepared to manage AI risks.
Risks that are not understood cannot be mitigated. A recent Gartner survey of chief information security officers (CISOs) reveals that most organizations have not considered the new security and business risks posed by AI or the new controls they must institute to mitigate those risks. AI demands new types of risk and security management measures and a framework for mitigation.
Gartner recommends that security and risk leaders focus on five key priorities to effectively manage AI risk and security within their organizations.
1. Capture the extent of AI exposure
Machine learning (ML) models are opaque to most users and, unlike conventional software systems, their inner workings can elude even the most skilled experts. Data scientists and model developers generally understand what their ML models are trying to do, but they cannot always decipher the internal structure or the algorithmic means by which the models process data.
This lack of transparency severely limits an organization’s ability to manage AI risk. The first step in AI risk management is to inventory all AI models used in the organization, whether they are components of third-party software, developed in-house or accessed via SaaS applications. The inventory should also identify interdependencies among models. Then rank the models by operational impact, so that risk management controls can be applied over time according to those priorities.
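As a minimal sketch of what such an inventory might capture, the example below defines a hypothetical record type and ranks entries by operational impact. All field names and entries are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    """One entry in a hypothetical AI model inventory."""
    name: str
    source: str              # "in-house", "third-party" or "saas"
    owner: str               # accountable team or individual
    depends_on: list = field(default_factory=list)  # upstream model names
    operational_impact: int = 0  # e.g., 1 (low) to 5 (critical)

# Example entries; ranking by impact drives the order in which controls are applied.
inventory = [
    ModelRecord("credit-scoring-v3", "in-house", "risk-analytics", ["fraud-screen-v1"], 5),
    ModelRecord("chat-summarizer", "saas", "support-ops", [], 2),
]
inventory.sort(key=lambda m: m.operational_impact, reverse=True)
```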
Once inventoried, the next step is to make the AI models as explainable or interpretable as possible. “Explainability” means the ability to produce details, reasons or interpretations that clarify a model’s functioning for a specific audience. This will give risk and security managers context to manage and mitigate business, social, liability and security risks posed by model outcomes.
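As one illustration of post-hoc explainability, open-source libraries such as SHAP can attribute a model’s predictions to its input features. The sketch below fits a toy classifier purely as a stand-in for an inventoried production model:

```python
import shap  # third-party explainability library
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy model standing in for an inventoried production model.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Attribute each prediction to its input features so risk reviewers can see
# which factors drove a given outcome.
explainer = shap.Explainer(model, X)
explanation = explainer(X[:10])
print(explanation.values.shape)  # per-sample, per-feature attributions
```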
2. Drive awareness through an AI risk education campaign
Staff awareness is a critical component of AI risk management. First, bring all participants on board, including the CISO, the chief privacy officer, the chief data officer and the legal and compliance officers, and recalibrate their mindset on AI. They should understand that AI is not “like any other app”: it poses unique risks and requires specific controls to mitigate them. Then, go to the business stakeholders to expand awareness of the AI risks that must be managed.
Together with these stakeholders, identify the best way to build AI knowledge across teams and over time. For example, see if you can add a course on fundamental AI concepts to the enterprise’s learning management system. Collaborate with application and data security counterparts to help foster AI knowledge among all organizational constituents.
3. Eliminate AI data exposure through a privacy program
According to a recent Gartner survey, privacy and security are viewed as the primary barriers to AI implementation. Adopting data protection and privacy programs can effectively eliminate the exposure of internal and shared AI data.
A range of approaches can be used to access and share essential data while still meeting privacy and data protection requirements. Determine which data privacy technique, or combination of techniques, makes the most sense for the organization’s specific use cases. For example, investigate techniques such as data masking, synthetic data generation or differential privacy.
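To make one of these techniques concrete, the sketch below implements a textbook Laplace mechanism for differential privacy, releasing a noisy mean of bounded values. The dataset and epsilon are illustrative; a production system would use a vetted DP library:

```python
import numpy as np

def dp_mean(values, lower, upper, epsilon, rng=None):
    """Release an epsilon-differentially private mean of bounded values."""
    rng = rng or np.random.default_rng()
    clipped = np.clip(values, lower, upper)        # bound each record's influence
    sensitivity = (upper - lower) / len(clipped)   # max change from any one record
    return clipped.mean() + rng.laplace(0.0, sensitivity / epsilon)

salaries = np.array([52_000, 61_000, 58_500, 75_000, 49_000], dtype=float)
print(dp_mean(salaries, lower=30_000, upper=150_000, epsilon=1.0))
```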
Address data privacy requirements when exporting or importing data to and from external organizations. Techniques such as fully homomorphic encryption (FHE) and secure multiparty computation (SMPC) can be more useful in these scenarios than for protecting data from internal users and data scientists.
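To illustrate the idea behind SMPC, the toy sketch below uses additive secret sharing, a common building block of SMPC protocols: each organization splits its value into random shares that reveal nothing individually yet sum to the true total. Real protocols involve far more machinery; this shows only the core trick:

```python
import secrets

PRIME = 2**61 - 1  # field modulus; individual shares look uniformly random

def share(value, n_parties):
    """Split a value into n additive shares that sum to it modulo PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

# Two organizations jointly compute a sum without revealing their inputs.
shares_a = share(1_200, 2)   # organization A's private value
shares_b = share(3_400, 2)   # organization B's private value
partials = [(a + b) % PRIME for a, b in zip(shares_a, shares_b)]
print(sum(partials) % PRIME)  # 4600; neither input was exposed
```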
4. Incorporate risk management into model operations
AI models need special-purpose processes as part of model operations (ModelOps) to make AI reliable and productive. AI models must be continuously monitored for business value leakage and unpredicted — sometimes adverse — outcomes, as environmental factors continuously change.
Effective monitoring requires AI model understanding. Specialized risk management processes must be an integral component of ModelOps to make AI more trustworthy, accurate, fair, and resilient to adversarial attacks or benign mistakes.
Controls should be applied continuously — for example, throughout model development, testing and deployment, and ongoing operations. Effective controls will detect malicious acts, benign mistakes, and unanticipated changes to AI data or models that result in unfairness, damage, inaccuracy, poor model performance and predictions, and other unintended consequences.
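One widely used control of this kind is statistical drift detection on model inputs. The sketch below applies a two-sample Kolmogorov-Smirnov test to flag when live traffic diverges from the training distribution; the data, shift and significance threshold are all illustrative:

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(train_feature, live_feature, alpha=0.01):
    """Flag drift when live data is unlikely to share the training distribution."""
    stat, p_value = ks_2samp(train_feature, live_feature)
    return p_value < alpha, stat

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=5_000)  # distribution seen at training time
live = rng.normal(0.4, 1.0, size=1_000)   # shifted live traffic
drifted, stat = detect_drift(train, live)
print(f"drift={drifted}, KS statistic={stat:.3f}")  # alert the ModelOps team
```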
5. Adopt AI security measures against adversarial attacks
Detecting and stopping attacks on AI requires new techniques. Malicious attacks against AI can lead to significant organizational harm and loss, including financial and reputational damage as well as compromise of intellectual property, sensitive customer data and proprietary data. Application leaders, working with their security counterparts, must add controls to their AI applications that detect anomalous data inputs, malicious attacks and benign input errors.
Implement a full set of conventional enterprise security controls around AI models and data, as well as new AI-specific integrity measures, such as training models to tolerate adversarial AI. Finally, use fraud, anomaly and bot detection techniques to prevent AI data poisoning and to catch input errors.
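As a sketch of what input-level anomaly detection might look like, an isolation forest fitted on known-good inputs can flag suspicious records, such as out-of-range values from a poisoning attempt, before they reach the model. The data, contamination rate and quarantine step are illustrative:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
clean_inputs = rng.normal(0.0, 1.0, size=(2_000, 4))  # known-good feature vectors

# Fit on trusted data only, so records unlike it are flagged at scoring time.
detector = IsolationForest(contamination=0.01, random_state=0).fit(clean_inputs)

incoming = np.vstack([
    rng.normal(0.0, 1.0, size=(5, 4)),  # benign requests
    np.full((1, 4), 8.0),               # out-of-range record, e.g. a poisoning attempt
])
flags = detector.predict(incoming)      # +1 = looks normal, -1 = anomalous
quarantine = incoming[flags == -1]      # route flagged inputs for human review
print(flags)
```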
AI security measures will be further discussed at the Gartner Security & Risk Management Summit 2023, February 13-14 in Mumbai, India.