Security is no longer an afterthought; it must be embedded into every stage of AI development: Sujatha S Iyer, Zoho Corporation
Integrating security into every stage of AI development is essential to protect against evolving threats and ensure compliance
AI is reshaping industries at a rapid pace, but with this transformation comes growing concerns around security. In a recent interaction with Express Computer, Sujatha S Iyer, Head of AI Security, Zoho Corporation, offers an in-depth perspective on the importance of building security into every phase of AI development. She discusses Zoho’s approach to ensuring the security and ethical use of AI, the challenges companies in India face, and the evolving threat landscape driven by AI-powered attacks.
AI is transforming industries at an unprecedented pace, but with this rapid growth comes significant security concerns. As the Head of AI Security at Zoho Corporation, can you share your perspective on the current AI security landscape and what key trends are shaping the industry today?
Traditionally, security relied on rule-based systems and static thresholds, meaning it was largely reactive. You would only address issues after they occurred. However, with increasing focus on privacy and security, we now live in a security-aware world where any breach can severely damage a company’s reputation and result in significant fines. Today, AI in security has shifted from being a “nice to have” to a “must-have.” AI is no longer limited to static thresholds—it dynamically creates thresholds tailored to specific data. This allows organisations to proactively detect issues before they escalate into major incidents.
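The shift from static rules to thresholds derived from the data itself can be sketched in a few lines. This is purely an illustrative mean-plus-k-sigma rule, not Zoho's implementation; the login-failure counts and the multiplier k are hypothetical:

```python
import statistics

def dynamic_threshold(values, k=3.0):
    """Derive an alert threshold from the data itself (mean + k * stdev),
    instead of a fixed, hand-tuned static cutoff."""
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    return mean + k * stdev

# Hypothetical login-failure counts per minute for one tenant
history = [2, 3, 1, 4, 2, 3, 2, 5, 3, 2]
threshold = dynamic_threshold(history)

current = 40  # a sudden spike
if current > threshold:
    print("anomaly: investigate before it escalates")
```

Because the threshold is recomputed from each tenant's own history, the same code flags a spike for a quiet system and stays silent for a naturally noisy one, which is the proactive behaviour a single static rule cannot give.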
In the past six to seven years, major companies have been undergoing digital transformation. In the last three to four years, there has been a shift towards automation and incorporating AI. Many companies claim they are AI-ready. However, do you believe that investing in security development and posture is as important as other aspects of being AI-ready?
Absolutely. Security should never be an afterthought. It needs to be integrated into every process. There must be a cultural shift where security becomes an integral part of both business and product development, including AI model development. It’s crucial to ensure that no security mishaps occur by securing every stage of AI development.
For instance, data in your systems should be encrypted, and when feeding it into an AI model, it should be anonymised to avoid inadvertently processing any personally identifiable information (PII). Incorporating security throughout the entire process ensures a stronger security posture with AI. This prevents the need for back-and-forth adjustments later, as security is already embedded from development to production. Security must become part of the organisation’s culture.
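The anonymisation step described above can be sketched as a simple redaction pass before text ever reaches a model. The two regex patterns and placeholder tokens below are illustrative only; a production pipeline would use a vetted PII-detection library and cover far more identifier types:

```python
import re

# Illustrative patterns only -- real PII detection covers names,
# addresses, national IDs, card numbers, and more.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{10}\b"),
}

def anonymise(text):
    """Replace detected PII with placeholder tokens so the model
    never trains on or processes the raw identifiers."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

record = "Contact ravi@example.com or call 9876543210 about the ticket."
print(anonymise(record))
# -> Contact <EMAIL> or call <PHONE> about the ticket.
```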
What do you think makes India a lucrative market for cyber attackers? What factors contribute to this attractiveness? Where are companies in India falling short in terms of security?
In terms of enterprise security, the widespread accessibility of AI is a key factor. GenAI can be used to create highly convincing phishing campaigns, making it easier for attackers to deceive people. As the saying goes, “set a thief to catch a thief,” so we believe in tackling AI-driven threats with AI itself. Today’s malware is much smarter and can evade traditional antivirus systems, making it essential to integrate AI into security measures. By leveraging AI to learn and understand malware behaviour, companies can significantly improve their security posture and better defend against evolving cyber threats.
How is Zoho ensuring the security and ethical use of AI across its vast suite of enterprise applications, and what unique security frameworks or protocols does the company follow?
At Zoho Corporation, we take privacy very seriously, and our focus on privacy started long before it became a widespread concern. We do not rely on ad revenue, and we own the entire tech stack, from data centres to applications. This gives us complete control over our data, as there is no third-party data transfer; everything resides within our data centres.
Our AI models are built in-house and are trained only on open-source, commercially usable datasets, ensuring compliance with various laws. We are already compliant with GDPR, HIPAA, and other regulations. As new laws like India’s DPDP Act and the EU AI Act emerge, we have processes in place, such as conducting regular Data Protection Impact Assessments (DPIAs) where we document everything, including the model used, training datasets, data flow, risks, and mitigation measures.
We also perform frequent internal and external audits to ensure compliance with the law across all geographical regions where we operate.
You mentioned your data centres. Are they on-premises, or are they cloud-based?
We have our own data centres in regions like the US, Australia, India and so on. However, some enterprises prefer to host solutions in their own data centres or on private cloud deployments, and we accommodate that on a case-by-case basis.
So in such cases, is their security also taken care of by you?
Since the application is hosted in their data centres, we handle the application security, but the company is responsible for securing the data centre itself.
What advice would you give to CISOs and security leaders to stay ahead of the curve?
My advice to CISOs and security leaders is to avoid getting caught up in hype cycles. Just because technologies like GenAI are popular doesn’t mean they’re the right solution for everything. Not every problem, especially in security, requires a large language model (LLM). For instance, anomaly detection, which is crucial in identifying potential threats like DDoS attacks, doesn’t need LLMs. Traditional machine learning techniques can handle such tasks effectively, using smaller, faster models that require less computational power. The key is to carefully evaluate each use case and choose the technology that best fits the need, focusing on delivering value rather than following trends.
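The point that anomaly detection does not need an LLM can be illustrated with a classic rolling-median detector scaled by the median absolute deviation (MAD). The traffic samples, window size, and multiplier below are hypothetical, and this is one of many lightweight techniques, not a specific product's method:

```python
import statistics

def mad_anomalies(series, window=20, k=6.0):
    """Flag points far from the rolling median, scaled by the median
    absolute deviation (MAD) -- a small, fast, LLM-free detector."""
    flagged = []
    for i in range(window, len(series)):
        recent = series[i - window:i]
        med = statistics.median(recent)
        mad = statistics.median(abs(x - med) for x in recent) or 1.0
        if abs(series[i] - med) > k * mad:
            flagged.append(i)
    return flagged

# Hypothetical requests-per-second samples with a DDoS-like spike at the end
traffic = [100, 102, 99, 101, 98, 103, 100, 97, 101, 100,
           99, 102, 100, 98, 101, 103, 100, 99, 102, 100,
           100, 101, 5000]
print(mad_anomalies(traffic))  # -> [22]
```

A detector like this runs in microseconds on commodity hardware, which is exactly the cost-versus-value trade-off to weigh before reaching for a large model.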