By: Ashok Panda, Vice President, Global Head, AI & Automation Services, Infosys
AI is revolutionising innovation across industries by enhancing operations and processes. In manufacturing, AI enables real-time operational insights, optimising production and improving supply chain resilience through machine learning and image recognition. Financial institutions leverage AI for more comprehensive credit evaluations and personalised financial advice, utilising machine learning and generative AI. Additionally, AI’s analytical capabilities empower brands to deeply understand customers, personalising engagement and enhancing experiences across channels.
Data is the backbone of AI, yet it exposes organisations to risks such as data security breaches, privacy violations, and bias. Insufficient training data and data inaccuracies, if left unaddressed, can create financial and reputational losses, invite regulatory action or penalties, or even disrupt business continuity. While AI presents such challenges, it can also play a part in mitigating these risks when used in conjunction with other data security and privacy protection measures.
Here are some of the ways in which it does that:
AI systems can employ differential privacy to collect and learn from real user data while keeping it untraceable and not personally identifiable. Differential privacy shields user identity and personal details by adding carefully calibrated statistical noise to the data or to query results, so that no individual record can be singled out while aggregate outputs remain accurate.
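For illustration, here is a minimal sketch of the Laplace mechanism, one standard way differential privacy is applied to a counting query. The epsilon value, dataset and query below are purely illustrative:

```python
import numpy as np

def laplace_count(data, predicate, epsilon=1.0):
    """Answer a counting query with epsilon-differential privacy
    via the Laplace mechanism (a count has sensitivity 1, so the
    noise scale is 1/epsilon)."""
    true_count = sum(1 for record in data if predicate(record))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Illustrative query: how many users are over 40?
ages = [23, 45, 31, 52, 38, 61, 29, 47]
private_answer = laplace_count(ages, lambda age: age > 40, epsilon=0.5)
print(f"Noisy count: {private_answer:.1f}")
```

A smaller epsilon adds more noise and gives stronger privacy; a larger epsilon gives more accurate answers at the cost of weaker guarantees.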
In machine learning, data is typically centralised for processing before models are trained. However, this data transfer poses security and privacy risks, particularly in sectors like healthcare and finance. Federated learning can address this challenge: models are trained collaboratively across multiple machines or entities, each working on its own decentralised data source, and only model updates rather than raw data are shared. Because the data never leaves its original source, the risk of a privacy breach is minimal.
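The canonical algorithm here is federated averaging (FedAvg): each client trains on its own data and sends back only model weights, which a server averages. Below is a minimal sketch using a simple linear model; the clients, learning rate and data are illustrative:

```python
import numpy as np

def local_update(weights, X, y, lr=0.05, epochs=5):
    """One client's local training: a few gradient steps on its
    own data for a linear least-squares model. Raw data never
    leaves this function."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_round(global_weights, clients):
    """One FedAvg round: each client trains locally, then the
    server averages the returned weights, weighted by data size."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_update(global_weights, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, float))

# Illustrative setup: three clients with private local datasets.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (40, 60, 50):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, clients)
print("Learned weights:", w)  # converges towards [2.0, -1.0]
```

Only the weight vectors cross the network in each round, which is what keeps the sensitive records at their source.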
To ascertain if AI innovation justifies its associated risks, organisations must assess the risks and implement necessary safeguards. This ensures they meet their security and privacy obligations while harnessing AI’s transformative potential.
Here are a few things to consider before making a decision:
Data privacy regulatory frameworks
Enterprises should ensure compliance with the various regulatory frameworks on data privacy, including the flagship General Data Protection Regulation (GDPR), which mandates that organisations collecting personal data allow the individuals concerned to access that data, rectify mistakes and even ask for it to be deleted. The California Consumer Privacy Act (CCPA) has similar provisions, but also gives California residents the right to opt out of the sharing or sale of their personal data.
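As an illustration of what these rights imply for system design, here is a hypothetical sketch of a data store exposing access, rectification and erasure operations; the UserStore class and its methods are invented for this example and are not part of any regulation or standard library:

```python
class UserStore:
    """Hypothetical in-memory store illustrating GDPR data subject
    rights: access (Art. 15), rectification (Art. 16), erasure (Art. 17)."""

    def __init__(self):
        self._records = {}

    def access(self, user_id):
        # Right of access: return a copy of everything held on the user.
        return dict(self._records.get(user_id, {}))

    def rectify(self, user_id, field, value):
        # Right to rectification: correct an inaccurate field.
        self._records.setdefault(user_id, {})[field] = value

    def erase(self, user_id):
        # Right to erasure: delete the user's personal data entirely.
        self._records.pop(user_id, None)

store = UserStore()
store.rectify("u42", "email", "user@example.com")
print(store.access("u42"))   # {'email': 'user@example.com'}
store.erase("u42")
print(store.access("u42"))   # {}
```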
Responsible AI design
Data quality is a huge concern with AI: poor training data that is wrong, incomplete, inconsistent or unfair can drive algorithms to spread falsehoods, produce inaccurate outcomes or perpetuate bias. A lack of transparency and explainability in AI models compounds this issue, preventing understanding of how or why a system produced a certain outcome. The proposed US federal Algorithmic Accountability Act seeks to enforce rigorous testing of AI models for unfair bias prior to deployment. Other frameworks for promoting safe, ethical and responsible AI development include the OECD AI Principles and the NIST AI Risk Management Framework. Responsible AI by design is a holistic approach in which Responsible AI practices are thought through from the concept stage and practised throughout design, development and deployment. This helps organisations understand each AI use case and assess its risks along the dimensions of privacy, fairness, explainability, safety and AI security, facilitating transparent, trustworthy and accountable AI system development.
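As a simple illustration of pre-deployment bias testing, the sketch below computes the demographic parity gap between two groups of predictions. The 0.1 threshold and the data are illustrative, and passing this single metric does not by itself establish overall fairness:

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute difference in positive-prediction rates between
    two groups. A value near 0 suggests similar treatment on this
    (limited) metric; it does not prove the model is fair overall."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Illustrative pre-deployment gate: flag the model if the gap
# between groups exceeds a policy-set threshold (0.1 here).
y_pred = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
group  = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
gap = demographic_parity_difference(y_pred, group)
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.1:
    print("Model fails bias check; investigate before deployment")
```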
Rewards without risk
AI stands as a transformative force with unparalleled potential for innovation. However, it also presents significant challenges, particularly around data security and privacy. To thrive in the digital age, organisations must strike a balance between harnessing AI's possibilities and ensuring security and regulatory compliance. A strategic approach is to adopt AI initially through low-risk, high-impact use cases, allowing organisations to reap early benefits while managing risks effectively. As their understanding deepens, they can progressively tackle more complex AI applications, implementing ethical safeguards to ensure responsible deployment and maintain trust.