By Priyadarshi Nanu Pany, Founder & CEO, CSM Tech
The European Union has unveiled the world’s first Artificial Intelligence Act, a watershed step in regulating AI. The law creates a unified framework for AI regulation, underscoring the EU’s commitment to the safe, ethical and responsible use of AI. This landmark legislation strives to strike a delicate balance between protecting fundamental freedoms and user privacy on the one hand and fostering AI innovation on the other.
As AI technology advances at breakneck speed worldwide, the EU AI Act sets a new gold standard, similar to how the General Data Protection Regulation (GDPR) did for data privacy. The impact of this legislation on businesses globally, especially in India, is extensive and varied.
The EU AI Act classifies AI systems into four risk levels: unacceptable, high, limited, and minimal. Unacceptable applications, such as government social scoring and real-time remote biometric identification in public spaces, are outlawed. High-risk applications, such as those in critical sectors like infrastructure, education, and law enforcement, must meet stringent requirements, including fundamental rights impact assessments, strong data governance and cybersecurity. Limited-risk applications carry transparency obligations, while minimal-risk applications are left unregulated. Developers of general-purpose AI systems such as ChatGPT, for example, must meet transparency obligations and sign up to codes of practice. The makers of these large language or multimodal models need to disclose how the models were trained and what data they were fed on, and must report instances where a model failed or behaved aberrantly.
This risk-based approach ensures that regulation is proportionate, focusing on areas where AI poses the greatest potential for harm. By requiring transparency and accountability, the Act aims to build trust in AI technologies, making them safer and more dependable for users. Like the GDPR, the EU AI Act is extraterritorial in scope, meaning foreign providers serving EU users must follow its rules. This has significant implications for international businesses seeking access to the European market.
The Act’s rigorous standards are likely to influence AI regulations globally, as other countries may adopt elements of the EU’s approach to align with these new norms. While the EU’s sweeping AI Act can act as a lodestar for the regulatory universe, countries should frame laws that reflect their own dynamics. India, for example, is putting the final touches on the Digital India Act, which also seeks to regulate AI. For India, which aspires to be a global AI hub, the focus ought to be on regulating the least and supporting the emerging AI ecosystem the most.
For Indian companies operating in the AI sector, the EU AI Act presents both challenges and opportunities. Businesses with a presence in Europe, like Yellow AI, Sigtuple, Qure AI, and Cropin AI, must comply with the Act’s requirements, which can increase their compliance burden and costs. This involves implementing risk management processes, conducting assessments, and maintaining strong data governance and cybersecurity measures. Failure to comply can result in hefty fines ranging from 7.5 million to 35 million euros, or 1 to 7 per cent of the company’s global annual turnover, whichever is higher, depending on the severity of the violation.
However, the Act also offers opportunities. By meeting stringent EU standards, Indian businesses can position themselves as champions of safe and ethical AI. Compliance can be a powerful marketing tool, building confidence with international clients and opening up new markets. Indian AI developers can also consider teaming up with EU counterparts that have a better grasp of the regulatory landscape.
Further, the Act encourages innovation through measures like regulatory sandboxes and reduced fees for startups, helping small and medium enterprises navigate the new rules. Lessons from the EU’s approach could be valuable in creating a predictable environment for AI development in India, attracting investment and nurturing a vibrant AI ecosystem.
To navigate the EU AI Act, businesses must adapt proactively and plan strategically. Establishing compliance pathways, conducting thorough risk assessments, and investing in transparency and ethical AI practices are essential. Prioritizing human-centric and trustworthy AI will help businesses avoid penalties and build a reputation for reliability and responsibility in the AI market.
The EU AI Act is a crucial step in AI regulation, setting a benchmark that will influence global norms. For Indian businesses, this legislation presents both challenges and opportunities. By embracing the Act’s principles and striving for compliance, Indian companies can establish themselves in the European market and become leaders in ethical AI. While the journey may be complex, the destination—a world where AI is developed and deployed responsibly—is undoubtedly worth the effort.