By Tony Buffomante, Global Head of Cybersecurity and Risk Services at Wipro Limited
Continuous disruption in technology has become the hallmark of the age that we live in. As a result, the remits of a Chief Information Security Officer (CISO) and security practitioners are transforming significantly and rapidly. The best response to continuous disruption is continuous innovation.
The amplified use of advanced technologies and tactics for ransomware and phishing has led to increasingly sophisticated cyber threats. Added to this, rapid changes in the economy are putting organizations under pressure to optimize their cybersecurity efforts within tight budgets. Findings from the recent State of Cybersecurity Report 2023 (SOCR 2023) reveal that over two-thirds of organizations allocate less than 10% of their IT budget to security.
While many enterprises are arguably approaching the culmination of their digital transformation journeys, having migrated workloads to public or private cloud platforms, this does not signal downtime for security leaders. That is largely due to the proliferation of Artificial Intelligence (AI) across businesses. Unlike cloud adoption, AI presents challenges of a vastly different magnitude, impacting and redefining the fabric of security and risk management.
In the pursuit of efficient and scalable growth, organizations are rapidly embracing Generative AI tools, embedding AI at multiple touchpoints to deliver enriched customer experiences, increased operational efficiency and more intelligent software. Even in its nascent stages, AI demonstrated its ability to automate routine tasks and unearth patterns to form correlations. As AI and Machine Learning (ML) transition from nascent technologies to mainstream powerhouses underpinning other technologies, their influence on the risk and compliance ecosystem within organizations will be nothing short of revolutionary.
Risks associated with AI
This emerging scenario brings with it unprecedented risks. Hacking has become a well-funded, multi-billion-dollar industry, where malicious entities utilize the same advanced technology tools as their targets. This has led to a surge in the complexity and frequency of cyberattacks, sometimes shortening attack life cycles to a matter of hours.
Many breaches exploit basic lapses in cyber hygiene, which, despite advancements in technology, remains less than optimal. This means attackers do not necessarily have to be very sophisticated in their approach. However, as bad actors gain access to AI and ML, the sophistication of their attacks is bound to increase.
Enterprises are now compelled to embrace AI/ML systems to bolster their defences while propelling growth in parallel. These technologies learn, adapt, and react swiftly to identify novel attack vectors and accelerate defensive responses. As more layers of detection are added, organizations become better equipped to address increasingly ingenious breaches.
The journey to harnessing the advantages of AI while maintaining robust cybersecurity is a complicated one. The SOCR 2023 underscores the focus on security orchestration and automation, with 79% of organizations prioritizing it. However, the rapid evolution of Generative AI and large language models, and the capabilities they bring, can tempt organizations to push risk management into the background. The CISO's role in managing risk, security and compliance when using Generative AI is therefore a formidable one.
When it comes down to brass tacks, AI's effectiveness hinges on the people creating and steering it. While the risks surrounding AI are abundant, it is counter-productive to halt its use until every conceivable vulnerability is identified and addressed. Achieving secure AI deployment demands collaboration: the right people to develop code, test it rigorously, and ensure ongoing oversight and monitoring. Cost-effective AI governance entails a risk-based framework that involves continuous monitoring and takes pre-emptive measures to prevent security vulnerabilities and data leaks.
Need for a robust framework
Organizations need to establish a comprehensive framework that encompasses the regulations and controls governing the implementation of AI technology. This includes defining which types of prompts may be used with AI models and which are prohibited, along with devising strategies for effectively using the outputs generated. Trustworthy AI is safe, secure, resilient, accountable and transparent, explainable and interpretable, fair, valid and privacy-enhanced.
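As a purely illustrative sketch of what such prompt controls might look like in practice, the snippet below screens a prompt against a deny-list of prohibited categories before it reaches a Generative AI model. The category names, patterns and function names are assumptions made for illustration, not part of any specific policy or product.

```python
import re

# Hypothetical deny-list of prohibited prompt categories; the categories and
# patterns are illustrative assumptions, not an actual corporate policy.
PROHIBITED_PATTERNS = {
    "secrets": re.compile(r"(api[_ ]?key|password|private key)", re.IGNORECASE),
    "customer_pii": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # e.g., a US SSN-like format
    "proprietary_code": re.compile(r"paste (our|the) proprietary", re.IGNORECASE),
}

def check_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, violations) for a prompt against the prohibited categories."""
    violations = [name for name, pattern in PROHIBITED_PATTERNS.items()
                  if pattern.search(prompt)]
    return (len(violations) == 0, violations)

allowed, violations = check_prompt("Summarize this customer email: ...")
if not allowed:
    # Block the request and record the event for the audit trail / risk register.
    print(f"Prompt blocked; policy violations: {violations}")
```

In a real deployment this kind of check would typically sit alongside, not replace, model-side guardrails and human oversight.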
A resilient framework to securely deploy AI ideally comprises four core functions; a minimal sketch of how they might translate into practice follows the list below:
· Govern: Foster a culture of risk management.
· Map: Recognize context and identify related risks.
· Measure: Assess, analyze, and track identified risks.
· Manage: Prioritize and address risks based on project impact.
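The sketch below shows one hedged interpretation of how these four functions could map onto a simple AI risk register: ownership assigned under Govern, risks identified under Map, scored under Measure, and prioritized under Manage. The fields, scoring scale, system names and risk descriptions are illustrative assumptions, not a prescribed implementation.

```python
from dataclasses import dataclass

@dataclass
class AIRisk:
    """One mapped risk for an AI system (illustrative fields, 1-5 scales)."""
    system: str
    description: str
    likelihood: int   # 1 (rare) to 5 (almost certain)
    impact: int       # 1 (negligible) to 5 (severe)
    owner: str        # Govern: accountability is assigned per risk
    status: str = "open"

    @property
    def score(self) -> int:
        # Measure: a simple likelihood x impact rating
        return self.likelihood * self.impact

def manage(register: list[AIRisk]) -> list[AIRisk]:
    """Manage: prioritize open risks so the highest-rated items are addressed first."""
    return sorted((r for r in register if r.status == "open"),
                  key=lambda r: r.score, reverse=True)

# Map: risks identified for a hypothetical customer-support chatbot
register = [
    AIRisk("support-chatbot", "Prompt injection exposes internal data", 4, 5, "app-security"),
    AIRisk("support-chatbot", "Model output leaks customer PII", 3, 5, "privacy-office"),
    AIRisk("support-chatbot", "Training data poisoning degrades answers", 2, 3, "ml-platform"),
]

for risk in manage(register):
    print(f"{risk.score:>2}  {risk.description}  (owner: {risk.owner})")
```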
The approach to governing AI and ML while considering cybersecurity and privacy is not a one-size-fits-all solution. Below are seven suggested steps that organizations can adopt to enhance their digital resilience with AI-driven technologies:
· Define purpose and clear user directives
· Clarify code ownership
· Establish intellectual property rights
· Address security policies/strategies and confidentiality guardrails
· Prioritize identity security
· Enhance security offerings
· Ensure regulatory and legal compliance
These guidelines enable security executives to conduct well-informed conversations with stakeholders seeking to deploy AI systems. Once a governance procedure is agreed upon and implemented, systems can be categorized and potential risks formally recorded. This lays the groundwork for integrating cybersecurity measures and protective protocols directly into the AI system and data model, establishing a foundational infrastructure. Such a step-by-step progression is necessitated by the expanding attack surface that the proliferation of AI systems brings.
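As a hedged illustration of what system categorization might look like once governance is in place, the sketch below assigns an AI system to a coarse impact tier that then scopes the controls and the risks to be recorded against it. The criteria, tier names and example controls are assumptions chosen for illustration only.

```python
# Hypothetical categorization of an AI system into an impact tier, which then
# determines the baseline controls to apply; criteria and tiers are illustrative.
def categorize_ai_system(handles_personal_data: bool,
                         customer_facing: bool,
                         makes_automated_decisions: bool) -> str:
    """Assign a coarse impact tier used to scope security and privacy controls."""
    if makes_automated_decisions and handles_personal_data:
        return "high"      # e.g., mandatory human review, logging, red-team testing
    if customer_facing or handles_personal_data:
        return "moderate"  # e.g., prompt guardrails, output filtering, access controls
    return "low"           # e.g., standard secure-development baseline

tier = categorize_ai_system(handles_personal_data=True,
                            customer_facing=True,
                            makes_automated_decisions=False)
print(f"Impact tier: {tier}")  # risks are then formally recorded against this tier
```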
The cybersecurity industry stands at a pivotal intersection. For over a decade, the industry has been grappling with a surge in the frequency and consequences of cyberattacks. Enterprises are now gaining a deeper comprehension of cyber risks, driven by more comprehensive evaluations, and increased funding is being directed toward controls that are predominantly focused on identification and prevention.