India embraces AI, but experts warn of need for strong governance

Artificial Intelligence (AI), widely regarded by experts as the fourth industrial revolution, is rapidly gaining momentum in India, with the technology finding its way into everything from healthcare to banking. But as excitement builds, so do worries about data privacy, bias in AI systems, and even job losses. Recent cases, such as law enforcement misusing facial recognition and AI-powered fraud schemes, have people spooked.

The need for AI governance

Experts are calling for stricter rules and regular check-ups for AI – in other words, governance and audits. Without them, they warn, things could go very wrong. AI could widen existing social gaps and leave people’s personal information exposed. Naseem Halder, Head of Cybersecurity and Compliance, Slice, says, “The lack of robust governance mechanisms could lead to significant breaches of privacy and misuse of AI, resulting in a loss of trust among the public.”

“AI systems process vast amounts of personal data, and without strict governance, data protection can be inadequate, resulting in unauthorised access, misuse of sensitive information, and losing customers’ trust,” adds Gautam Goenka, Vice President of Software Engineering and Site Head, UiPath.

“Inconsistent standards lead to a fragmented landscape with varying AI development approaches, hindering interoperability and universal ethical norms,” notes Kishore Seshagiri, Chief Digital Officer, Broadridge India.

The pressure for responsible AI is especially high as the technology starts to play a bigger role in crucial areas like healthcare, education, and finance. With AI advancing so quickly, making sure these systems are built and used ethically and openly is more important than ever. “By 2025, the concentration of pretrained AI models among 1% of AI vendors will make responsible AI a societal concern,” says Anushree Verma, Director Analyst at Gartner.

AI fuels competitiveness, fosters economic growth, drives social advancements, and promotes environmental well-being. Across the globe, nations are strategically leveraging data and intellectual property (IP) to gain a competitive edge. For India, a robust AI capability will be a key driver in the knowledge-based economy. India is reportedly positioned among the top three global leaders (alongside the US and China) in developing and refining AI technologies. India is also lauded for having over 58% of its AI applications in the implementation stage, surpassing the pilot and testing phases.

Building trustworthy AI systems

“India’s approach to AI must consider its current strengths and weaknesses, necessitating large-scale transformational interventions primarily led by the government, with robust support from the private sector,” shares Anil Pawar, Chief AI Officer, Yotta Data Services. The risks associated with the lack of strong AI governance in the country are multifold but are primarily linked to privacy. The entire AI landscape is backed by the generation, collection, and processing of large amounts of data on individual, entity, and community behaviour. “Data collection without proper consent, privacy of personal data, inherent selection biases and the resultant risk of profiling and discrimination, and the non-transparent nature of AI solutions are some of the issues requiring deliberation and proper recourse,” he adds.

Sikhin Tanu Shaw, CIO, Can Fin Homes, emphasises the importance of designing AI systems with features that ensure transparency, explainability, and accountability while also addressing bias and data privacy concerns through specific technical solutions. He adds, “Transparency can be maintained through proper documentation and reporting, while explainability can be achieved by using techniques like LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations). Accountability can be ensured by implementing audit trails, which include logging and monitoring mechanisms, version control, feedback, and reviews.”
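To make that concrete, here is a minimal, illustrative Python sketch of how a LIME explanation might be generated and logged for a single automated decision. The model, synthetic data, feature names, and class labels are all hypothetical, invented for illustration; this is not a description of any specific company’s implementation.

```python
import lime.lime_tabular
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for, e.g., loan-application features (illustrative only).
feature_names = [f"feature_{i}" for i in range(8)]
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# LIME fits a local, interpretable surrogate model around one prediction.
explainer = lime.lime_tabular.LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["reject", "approve"],
    mode="classification",
)
explanation = explainer.explain_instance(X_test[0], model.predict_proba, num_features=5)

# The weighted feature conditions behind this one decision -- the kind of
# record an explainability audit trail could store per automated decision.
print(explanation.as_list())
```

Logging such explanations alongside model version and timestamp is one way the documentation, monitoring, and audit-trail practices Shaw describes could be wired together in practice.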

To address bias and privacy concerns, Shaw suggests solutions like bias detection and mitigation through fairness metrics and bias audits, the use of diverse training data with inclusive datasets and data augmentation, and privacy-preserving techniques like differential privacy using TensorFlow Privacy. Additionally, access control and encryption methods, including data encryption protocols like TLS 1.2 and above and role-based access control, can be utilised. Halder adds, “Ensuring that AI systems are not only technically robust but also ethically sound is crucial. This requires ongoing monitoring and adjustments to tackle new challenges as they arise.” Goenka further elaborates that bias in AI systems can lead to unfair treatment and exacerbate existing social inequalities. For example, biased algorithms in hiring or loan approvals can put certain groups at a disadvantage, worsening social disparities.
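As a rough illustration of what a bias audit built on a fairness metric might look like, the short Python sketch below computes a demographic parity gap over a hypothetical log of loan decisions. The data, group labels, and any threshold applied to the gap are invented for illustration, not drawn from any real system.

```python
import pandas as pd

# Hypothetical audit log of automated loan decisions (invented data).
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   1,   0,   0],
})

# Approval rate per group.
rates = decisions.groupby("group")["approved"].mean()

# Demographic parity difference: the gap between the most- and least-favoured
# groups. A large gap flags the model for closer human review.
parity_gap = rates.max() - rates.min()
print(rates)
print(f"Demographic parity gap: {parity_gap:.2f}")
```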

The road to responsible AI development

Implementing strong AI governance has become fundamental to mitigating these risks, ensuring ethical, secure, and compliant use of AI technologies in India’s rapidly digitising economy.

Experts opine that the governance structure for AI must include a National AI Council, with representatives from industry, government, academia, and civil society, to oversee policy development. Additionally, a data protection authority is essential to enforce privacy regulations and handle grievances. Sector-specific regulatory bodies are needed to oversee AI applications in healthcare, finance, and transport.

“AI audit mechanisms are required to incorporate data protection audits, bias and fairness audits, and algorithm impact assessments,” says Shaw. Implementing these requires comprehensive AI and data protection laws, capacity building through training programs, and fostering public-private partnerships to develop practical governance frameworks. Leveraging technology solutions to automate parts of the audit process will enhance efficiency and scalability, while regular stakeholder engagement is essential to gather feedback and make necessary adjustments to the governance and audit frameworks. “International collaboration and the establishment of legal frameworks with government penalties for non-compliance ensure global accountability and support ethical and secure AI deployment,” shares Halder.
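What “automating parts of the audit process” could look like is sketched below: a hypothetical Python harness that runs named checks spanning the three audit types Shaw lists and records timestamped results for the audit trail. The check names and pass/fail logic are placeholders, an assumption for illustration rather than an established framework.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Callable, List, Tuple

@dataclass
class AuditResult:
    check: str
    passed: bool
    timestamp: str

def run_audit(checks: List[Tuple[str, Callable[[], bool]]]) -> List[AuditResult]:
    """Run each named check and record its outcome with a UTC timestamp."""
    return [
        AuditResult(check=name, passed=fn(), timestamp=datetime.now(timezone.utc).isoformat())
        for name, fn in checks
    ]

# Placeholder checks covering the three audit types mentioned above.
checks = [
    ("data_protection: consent recorded for all training rows", lambda: True),
    ("bias_and_fairness: demographic parity gap below 0.10",    lambda: 0.08 < 0.10),
    ("algorithm_impact: assessment report filed",               lambda: True),
]

for result in run_audit(checks):
    print(result)
```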

“Traditional command-and-control approaches will stifle innovation, while complete autonomy will yield unpredictable results,” shares Verma. This is the challenge facing policymakers right now. Meanwhile, transparent accountability leads to good governance. Without governance and accountability, policies and regulations are worthless. The increasing complexities involved with the use of AI technology mandate an approach that creates structure out of chaos.

For this reason, Verma shares, some policymakers and regulators are evaluating a three-pronged approach to AI governance that includes audits. This approach is structured with the following elements: 

Element No. 1: Define a risk-informed AI governance framework.

Element No. 2: Define a range of categories based on this risk-based framework for intent and purpose.

Element No. 3: Define roles and responsibilities, decision rights, and accountability. Policymakers will then need to design risk tolerance, use cases and restrictions, decision rights, and disclosure obligations for organisations, as sketched below.
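One hypothetical way to encode these three elements in software is a mapping from use-case categories to risk tiers, with an approver and a disclosure obligation attached to each tier. The tiers, categories, and obligations in the Python sketch below are invented for illustration and are not Gartner’s or any regulator’s taxonomy.

```python
from enum import Enum

# Element 1: a risk-informed framework, here reduced to three illustrative tiers.
class RiskTier(Enum):
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3

# Element 2: categories of intent and purpose mapped onto the risk framework.
USE_CASE_TIER = {
    "spam_filtering": RiskTier.MINIMAL,
    "chat_assistant": RiskTier.LIMITED,
    "credit_scoring": RiskTier.HIGH,
    "medical_triage": RiskTier.HIGH,
}

# Element 3: decision rights, accountability, and disclosure per tier.
TIER_POLICY = {
    RiskTier.MINIMAL: {"approver": "team lead",            "disclosure": "internal record"},
    RiskTier.LIMITED: {"approver": "AI review board",      "disclosure": "notify affected users"},
    RiskTier.HIGH:    {"approver": "board plus regulator", "disclosure": "public impact report"},
}

def policy_for(use_case: str) -> dict:
    """Look up the governance obligations for a proposed AI use case."""
    tier = USE_CASE_TIER.get(use_case, RiskTier.HIGH)  # unknown use cases default to the strictest tier
    return {"use_case": use_case, "tier": tier.name, **TIER_POLICY[tier]}

print(policy_for("credit_scoring"))
```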

AI has already demonstrated its potential to transform economies, highlighting the need for India to adopt strong measures for its smooth and ethical governance. According to a recent study by IBM, seven out of 10 Indian CEOs surveyed believe that trusted AI is impossible without effective AI governance in organisations. However, only four in 10 Indian CEOs report having good GenAI governance in place today.

“At the application level, sensitive and confidential information faces threats, and malicious actors are increasingly gaining access to powerful tools,” adds Pawar. As a result, focusing on the governance of the tech stack is crucial, but it may also be beneficial to regulate the organisations developing AI solutions and the individuals behind the technology. While laws like the Digital Personal Data Protection Act (2023), the Information Technology Act (2000), and the Code of Criminal Procedure (1973) can be leveraged to fill the gaps in AI regulation, there is a dire need for skilled professionals who can develop, implement, and oversee AI governance frameworks. This requires significant investment in education and training of professionals in this field.

Regulatory uncertainty could hinder the development and deployment of AI technologies due to a lack of clear guidelines. Therefore, as AI continues to transform economies and societies, India needs to prioritise strong AI governance to ensure the ethical, secure, and responsible development and deployment of AI technologies. This requires a multi-faceted approach that includes robust governance frameworks, data protection laws, sector-specific regulations, and international collaboration. By implementing effective AI governance, India can harness the potential of responsible AI. The time for action is now, as the future of AI in India hangs in the balance. Will we rise to the challenge and establish a robust AI governance framework, or will we risk falling behind and exposing our citizens to the dangers of unregulated AI? The choice lies in the hands of the state and technology leaders combined.
