In an era of rapidly evolving cybersecurity threats, Generative AI is emerging as a game-changing tool in the hands of security leaders. However, while its capabilities in threat detection, automation, and advanced analytics make it an attractive option, many organizations are grappling with its broader implications. In this interview, Robert Pizzari, Group Vice President of Strategic Advisory, Asia Pacific at Splunk, shares insights from Splunk’s latest global report, “State of Security 2024: The Race to Harness AI.” He explores the paradox of high GenAI adoption amidst a lack of clear policies, the challenges security leaders face in understanding AI’s complexities, and how this technology is shaping the cybersecurity talent landscape.
Some edited excerpts:
What are the driving factors for high adoption of Generative AI in cybersecurity operations despite many organisations lacking a clear GenAI policy?
It’s no surprise that organisations are keen to adopt Generative AI, given its impressive capabilities in enhancing threat detection, automating routine tasks, and delivering advanced analyses that far exceed traditional methods. As GenAI becomes more mainstream, businesses are excited to tap into its potential to revolutionise cybersecurity operations – whether it’s identifying emerging threats, streamlining incident responses, or improving overall security posture.
In our recent global report, “State of Security 2024: The Race to Harness AI,” we found that a significant number of organisations have already incorporated GenAI tools into their cybersecurity frameworks. Those with strong cybersecurity programs and substantial budgets are particularly well-positioned to make the most of these advanced technologies. However, many organisations still lack a clear GenAI policy and may not fully understand the broader implications of this technology.
Our findings are quite revealing: a remarkable 93% of security leaders reported using public GenAI within their organisations, with 91% applying it specifically to cybersecurity operations. Yet, paradoxically, 34% of these organisations have not set up any formal GenAI policy, and 65% of respondents admit to not fully grasping the implications of this technology. This gap presents both a challenge and an opportunity for organisations to set up guardrails and develop comprehensive strategies that leverage the power of GenAI while ensuring proper governance and a clear understanding of its impact.
What challenges do you think security leaders face in understanding the broader implications of GenAI within their organisations?
One of the main hurdles is the need for a nuanced understanding of how GenAI operates, which is essential for accurately assessing its risks and benefits. This is crucial as integrating GenAI can blur traditional security boundaries, creating new attack vectors that may not be well understood or anticipated.
While 91% of security teams are using GenAI, 65% of our research respondents admitted that they don’t fully grasp its implications – a sign that adoption is outpacing understanding even as teams look for better ways to enhance their operations. However, there’s also a risk of over-reliance on GenAI as a security solution. While it can boost efficiency and offer valuable insights, it shouldn’t replace foundational security practices or human intuition.
It’s critical for security teams to remember that GenAI is a technology enabler, and users should remain in the driver’s seat when making critical decisions. At the same time, organisations should maintain basic cybersecurity practices, such as regularly updating IT asset inventories, to mitigate risks and improve long-term compliance.
What role does GenAI play in filling the cybersecurity skills gap, and how might this impact the overall talent landscape?
GenAI is stepping up to bridge the cybersecurity skills gap, especially as having skilled professionals is crucial for any Security Operations Center (SOC). Many organisations are still dealing with talent shortages, and GenAI could be just what they need.
In our research, we’ve learned that GenAI has helped attract more entry-level talent, with 86% of cybersecurity leaders believing it will make a difference. 90% of respondents shared that once these new hires are onboarded, they rely on GenAI to support their skill development within the SOC. This could mean assistance with basic tasks like writing Python scripts or setting up test environments, which helps them hit the ground running.
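To make that concrete, here is a minimal, hypothetical sketch of the kind of basic task a new SOC hire might draft with GenAI assistance – a short Python script that counts failed SSH login attempts per source IP in a Linux auth log. The log path and log format are illustrative assumptions, not something prescribed in the report.

```python
# Hypothetical illustration only: a basic SOC task a junior analyst might
# write with GenAI assistance. Counts failed SSH logins per source IP.
import re
from collections import Counter

LOG_PATH = "/var/log/auth.log"  # assumed location; varies by distribution
FAILED_LOGIN = re.compile(
    r"Failed password for (?:invalid user )?\S+ from (\d{1,3}(?:\.\d{1,3}){3})"
)

def count_failed_logins(path: str) -> Counter:
    """Return a Counter of failed SSH login attempts keyed by source IP."""
    counts = Counter()
    with open(path, errors="ignore") as log:
        for line in log:
            match = FAILED_LOGIN.search(line)
            if match:
                counts[match.group(1)] += 1
    return counts

if __name__ == "__main__":
    # Print the ten noisiest source IPs as a quick triage starting point.
    for ip, attempts in count_failed_logins(LOG_PATH).most_common(10):
        print(f"{ip}\t{attempts} failed attempts")
```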
But it’s not just about new hires – GenAI also acts as a game changer for seasoned professionals, boosting their productivity, allowing them to process information more efficiently and speeding up research and detection efforts.
While some concerns about AI potentially displacing jobs are valid, it seems more likely that GenAI will help organisations train new talent and reduce employee burnout. It might even change the cybersecurity talent landscape by creating new roles, like prompt engineering. Overall, integrating GenAI into cybersecurity practices is expected to make talent acquisition and ongoing professional development more efficient.
How are changing compliance mandates affecting the strategies and priorities of cybersecurity leaders?
Changing compliance mandates are significantly influencing the strategies and priorities of cybersecurity leaders today. As the landscape evolves, security professionals find themselves navigating an increasingly complex web of strict requirements, which raises the stakes. Many leaders recognise that the regulatory environment will impact their work in both expected and unexpected ways: 87% of respondents in our research anticipate that a year from now, they’ll approach compliance quite differently.
While compliance and cybersecurity can coexist, there are worries that one might be sacrificed for the other. This concern is echoed in our CISO Report, where 84% of CISO respondents expressed anxiety about personal liability for cybersecurity incidents. They also worry that their boards equate strong security with regulatory compliance rather than traditional success metrics.
The personal liability tied to compliance is making the cybersecurity field less appealing for many professionals; 76% cite this as a major factor, and 70% are even thinking of leaving the field due to job-related stress. Plus, 62% report that they’ve been directly affected by compliance mandates requiring the disclosure of material breaches.
In light of these challenges, the majority of security professionals plan to realign their budgets to prioritise compliance regulations over traditional security best practices. This highlights a critical adaptation in the cybersecurity landscape, where compliance has become a key part of strategic planning.
What steps should organisations take to ensure they remain competitive in the race to securely deploy GenAI in their cybersecurity operations?
Achieving a careful balance between innovation and security is essential for enterprises looking to remain competitive while securely deploying GenAI. First, establishing a well-defined policy and identifying specific business and security use cases can set them ahead. For instance, data leakage emerged as a top concern for the majority of the respondents in Splunk’s State of Security 2024 report, underscoring the need for targeted policies that address specific risks.
Secondly, fostering collaboration and breaking down silos within teams will enable a smooth deployment of GenAI in cybersecurity operations. Many companies face challenges due to an overload of disparate systems. Consolidating security tools can help streamline efforts, achieve visibility and focus on tackling meaningful threats.
Lastly, close collaboration with legal and compliance departments has become increasingly important. Many security professionals already integrate compliance into their responsibilities, and working closely with these teams ensures optimal preparedness. Engaging in simulation exercises, such as tabletop scenarios, can help uncover security and compliance gaps proactively, ultimately strengthening internal processes.