Ethical AI: Establishing guardrails in generative technologies
By Sajai Singh, Partner, JSA Advocates and Solicitors
Artificial intelligence (AI) and its more recent generative applications, including large and small language models (LLMs and SLMs), have created opportunities globally. They hold enormous potential to revolutionise industries and transform entire sectors by solving complex problems through extensive data analytics and enhanced efficiency, producing markedly more nuanced and productive outcomes.
However, AI deployment raises profound ethical issues, particularly in sensitive spheres such as public safety and healthcare, where robust ethical compliance is essential to prevent misuse. Concerns arise because AI systems can embed human biases, worsen environmental degradation and, directly or indirectly, threaten human existence. AI-linked risks are also reported to be widening existing inequalities, causing greater harm to already marginalised groups.
Accordingly, building guardrails is imperative to strike a delicate balance between innovation and ethical responsibility. Only then can the transformative power of AI be harnessed to drive positive social impact while mitigating potential risks and safeguarding against harmful, unintended consequences.
Understanding AI guardrails
Guardrails typically refer to the rules, methods and guidelines established to ensure that AI systems function safely and ethically while remaining within predetermined limits.
Like safety barriers on highways, guardrails steer AI systems away from unintentional harm and towards beneficial outcomes. Besides protecting against security vulnerabilities, they can prevent AI from generating inappropriate, misleading or harmful content, keeping these systems within legal and ethical parameters and shielding users from inadvertent harm.
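As a simple illustration of how an output guardrail can sit between a model and its users, consider the minimal Python sketch below. The blocklist, pattern and function names are hypothetical placeholders rather than any particular product's API; production systems would rely on far more sophisticated moderation and policy checks.

```python
import re

# Hypothetical, minimal output guardrail: the blocklist, pattern and model
# call below are illustrative placeholders, not any specific product's API.
BLOCKED_TERMS = {"build a weapon", "self-harm instructions"}
PII_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # crude match for SSN-like strings

def apply_output_guardrail(generated_text: str) -> str:
    """Return model output only if it passes basic content and privacy checks."""
    lowered = generated_text.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "Response withheld: the content may be harmful."
    if PII_PATTERN.search(generated_text):
        return "Response withheld: the content may expose personal data."
    return generated_text

# Usage: wrap whatever function actually calls the model, e.g.
#   safe_reply = apply_output_guardrail(call_llm(user_prompt))
```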
AI guardrails: Progress and risks
Internationally, governments have been racing to build guardrails that regulate the development of artificial intelligence without upsetting the delicate balance between privacy and innovation. The European Union has led with its Artificial Intelligence Act, while lawmakers in the US have held extensive public consultations with the industry.
However, in the absence of a standardised approach, each geography, platform and organisation could end up developing its own guardrail framework; besides being an extremely complex, cost-intensive exercise, this fragmentation may end up curtailing AI's huge potential.
As firms across verticals adopt generative AI, deploying AI guardrails is becoming increasingly critical. AI systems such as large language models (e.g. BERT and GPT-3) are being integrated into people's daily lives and enterprise operations, significantly increasing the risk of misuse or malfunction. Effective guardrails are needed to ensure AI is used ethically and responsibly, reducing the risk of unexpected outcomes and protecting users from poor or harmful experiences.
Ensuring legal compliance and maintaining public trust are vital, particularly in healthcare and other sectors that serve the public at large. For all of generative AI's capabilities, there is always the possibility that research data or sensitive clinical-trial information may be leaked, violating the privacy of patients or volunteers. AI models can also be tricked into producing wrong results through adversarial inputs. Given such scenarios, human intervention and supervision should be mandatory.
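One lightweight way to operationalise such supervision is to route low-confidence model outputs to a human reviewer before any action is taken. The sketch below is purely illustrative; the result structure, threshold and review queue are assumptions made for the example rather than features of any specific system.

```python
from dataclasses import dataclass

# Illustrative human-in-the-loop gate: the result structure, threshold and
# review queue are assumptions for this example, not part of any real system.

@dataclass
class ModelResult:
    label: str
    confidence: float  # model-reported confidence in [0.0, 1.0]

REVIEW_THRESHOLD = 0.90          # below this, a human must review the case
review_queue: list[tuple[str, ModelResult]] = []

def route_prediction(case_id: str, result: ModelResult) -> str:
    """Act on confident predictions; queue uncertain ones for human review."""
    if result.confidence < REVIEW_THRESHOLD:
        review_queue.append((case_id, result))
        return "pending_human_review"
    return result.label

# Usage:
#   decision = route_prediction("case-123", ModelResult("benign", confidence=0.72))
```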
Dangerous dimensions of data poisoning
Consider data poisoning, the injection of false, misleading or biased information into AI training sets to induce prejudiced or tainted output. Data poisoning already prevalent across social media and allied systems can fuel fake news and interference in elections, boosting the viral spread of violent, unscientific, unsubstantiated and inflammatory content. Such posts are then ranked highly on the strength of likes and reposts, often supercharged by bot armies.
During elections, data poisoning can cause societal harm and even threaten democratic rule, because this tainted information merges with the data streams used to train AI systems that are already fed unfiltered, biased data. The data is thus poisoned at the source, leaving little remedy other than eliminating the underlying biases before training.
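Catching poisoned data before it enters a training corpus is therefore the most practical defence. The following sketch shows one basic hygiene step, filtering records by provenance and dropping duplicate reposts; the source labels, trust list and record layout are hypothetical, and real pipelines would add anomaly detection and bias audits on top.

```python
# Illustrative pre-training hygiene check: the source labels, trust list and
# record layout are hypothetical; real pipelines would add anomaly detection
# and bias audits on top of simple provenance filtering.
TRUSTED_SOURCES = {"verified_newsroom", "official_statistics", "curated_corpus"}

def filter_training_records(records: list[dict]) -> list[dict]:
    """Keep records from trusted sources and drop exact duplicate texts."""
    seen_texts: set[str] = set()
    clean: list[dict] = []
    for record in records:
        text = record.get("text", "").strip()
        if record.get("source") not in TRUSTED_SOURCES:
            continue  # unknown provenance: exclude rather than risk poisoning
        if not text or text in seen_texts:
            continue  # drop empty records and duplicate/repost amplification
        seen_texts.add(text)
        clean.append(record)
    return clean
```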
The importance of evolving, dynamic norms
These scenarios underline the importance of establishing guardrails for every AI use case. Putting proper guardrails in place requires a comprehensive approach spanning each stage of AI design and development: defining ethical principles, conducting thorough risk assessments and designing for fair, transparent and accountable operation. Guardrails must also cover elements such as data governance and bias mitigation while embedding strong security controls to safeguard against prospective threats and misuse.
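These elements can be made concrete by expressing the guardrail framework as a machine-readable policy that tooling and audits can check against. The configuration below is a hypothetical illustration of how the stages above might be captured; the field names, thresholds and values are assumptions for the example, not a standard.

```python
# Hypothetical guardrail policy expressed as configuration, mirroring the
# stages described above; field names, thresholds and values are illustrative.
GUARDRAIL_POLICY = {
    "ethical_principles": ["fairness", "transparency", "accountability"],
    "risk_assessment": {
        "review_cadence_days": 90,
        "high_risk_use_cases": ["healthcare_triage", "public_safety_alerts"],
    },
    "data_governance": {
        "provenance_required": True,
        "retention_days": 365,
    },
    "bias_mitigation": {
        "evaluate_by_group": ["age", "gender", "region"],
        "max_disparity_ratio": 1.25,
    },
    "security": {
        "prompt_injection_filtering": True,
        "audit_logging": True,
    },
}
```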
Given the potential use of generative AI across segments, the creation of guardrails must be a multifaceted, collaborative endeavour involving stakeholders from an array of industries: AI developers, researchers and academics, technology companies, industry experts, government bodies and regulatory agencies, civil society organisations and ethicists, as well as end users and the general public.
Since AI technology is constantly evolving, creating AI guardrails is an ongoing process that demands continuous collaboration and adaptation to new technical developments, shifting societal values and changing legal norms. Only through such a dynamic model will AI guardrails remain relevant, efficient and effective in the fast-moving world of generative AI.