Building Responsible AI Guidelines: A Framework for Organisations

By Sandeep Bhargava, SVP, Global Services and Solutions, Public Cloud Business Unit, Rackspace Technology

OpenAI’s ChatGPT has played a crucial role in bringing AI into mainstream use, accelerating the adoption of generative AI technologies built on foundational models that span language, code, and visuals. According to a Deloitte survey, India ranks first in the adoption of generative AI across the Asia Pacific region, and 75% of Indians say that generative AI has great potential to elevate Asia Pacific’s role in the global economy.

Despite this, some companies have banned these technologies from their corporate systems. Given the current pace of adoption and the advantages of AI use, outright prohibitions are unlikely to be effective. Organisations should instead establish controls and foster a culture of responsible AI usage, raising awareness and offering training that highlights both the best uses of these technologies and their possible misapplications.

There should be a degree of responsibility when it comes to using AI. In the context of AI, responsibility means creating, implementing, and using AI in a manner that is ethical, trustworthy, equitable, and impartial, and that is transparent and beneficial to both individuals and society at large. It means employing AI as a support tool for decision-making, rather than as the entity making the decisions.

The Layers of Trust

AI systems are built on layers of information embedded in the models they use, which makes it difficult to pin down exactly what information is passed to the platforms behind services that incorporate AI capabilities. Foundational models can be sourced from numerous providers, both proprietary and open source.

To understand how this works in the real world, consider GitHub Copilot. Developers using this product for AI-assisted programming must be aware of the proprietary IP and data they share with the platform as part of collaborative code creation. The end user is the first layer of trust. That’s why organisations need to implement policies and governance around the use of GitHub Copilot; this is the second layer of trust, which the organisation invests in the software product it has chosen.

In turn, GitHub relies on OpenAI for the core foundational model, so GitHub extends a further layer of trust to OpenAI. To use the service responsibly, users need to understand what information is gathered by GitHub and by the foundational platform (in this case, OpenAI). This is where the layers of trust emerge: GitHub trusts that OpenAI is acting appropriately, and the user trusts that GitHub and their colleagues are using the service responsibly.
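
To make these relationships concrete, the layered trust described above can be sketched as a simple data model. This is only an illustrative sketch in Python: the class, field names, and example values are hypothetical and do not represent any actual GitHub or OpenAI interface.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class TrustLayer:
        """One link in the chain of trust around an AI-assisted service."""
        party: str              # who is being trusted, e.g. the platform or provider
        trusted_by: str         # who extends that trust
        data_shared: List[str]  # what information crosses this boundary
        safeguards: List[str]   # policies or agreements governing the exchange

    # Hypothetical illustration of the GitHub Copilot example described above.
    copilot_trust_chain = [
        TrustLayer(
            party="Developer (end user)",
            trusted_by="Organisation",
            data_shared=["prompts", "source code context"],
            safeguards=["acceptable-use policy", "responsible AI training"],
        ),
        TrustLayer(
            party="GitHub Copilot",
            trusted_by="Organisation",
            data_shared=["code snippets", "usage telemetry"],
            safeguards=["procurement review", "data-handling terms"],
        ),
        TrustLayer(
            party="OpenAI (foundational model)",
            trusted_by="GitHub",
            data_shared=["model inputs forwarded by the platform"],
            safeguards=["provider agreement"],
        ),
    ]

    for layer in copilot_trust_chain:
        print(f"{layer.trusted_by} -> {layer.party}: shares {', '.join(layer.data_shared)}")

Laying the chain out this way makes it easier to ask, layer by layer, what is shared and under which safeguards.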

Fundamentals of AI Ethics

There’s a great deal of enthusiasm about the new capabilities generative AI is unlocking; India is looking to increase its GDP by up to US$438 billion through the adoption of generative AI by 2030. There’s a growing need to enhance productivity in the hybrid workplace with AI co-pilots, and to do so through AI models that are secure, cost-effective, and scalable.

However, it’s essential that these productivity improvements be pursued responsibly. Organisations can take the first step by drafting straightforward policies that are easy to understand. Data classification policies and guides can incorporate specific examples of how information is categorised and how data should be handled safely.
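
As an illustration, such a policy can also be expressed in a form that tooling can check. The sketch below is hypothetical: the classification labels and the rule that confidential or regulated data must not be sent to external AI services are assumptions each organisation would tailor to its own policy.

    # Hypothetical data classification policy expressed as code.
    # Labels and rules are illustrative; real policies will differ.
    CLASSIFICATION_RULES = {
        "public":       {"allowed_in_external_ai": True},
        "internal":     {"allowed_in_external_ai": True},
        "confidential": {"allowed_in_external_ai": False},
        "regulated":    {"allowed_in_external_ai": False},
    }

    def may_share_with_ai_service(classification: str) -> bool:
        """Return True if data with this label may be sent to an external AI service."""
        rule = CLASSIFICATION_RULES.get(classification.lower())
        if rule is None:
            # Unknown labels default to the most restrictive treatment.
            return False
        return rule["allowed_in_external_ai"]

    print(may_share_with_ai_service("public"))      # True
    print(may_share_with_ai_service("regulated"))   # False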

Teams should be educated on ethical AI principles and directives, using practical, real-world examples. These practices should be reinforced with a system for overseeing the ethical application of AI, as well as a governance council that can prioritise, validate, and regularly update how policies are applied.

A responsible AI policy should cover the following parameters; a sketch of how these might be tracked in practice follows the list:

  • Governance and supervision through a committee with designated leaders to provide oversight, ensure compliance, conduct audits, and enforce the AI standard.
  • AI software governed by the same global procurement and internal usage regulations imposed on other software products.
  • The responsible use, oversight, and transparency of AI models. This encompasses confirming validity, reliability, safety, accountability, and transparency; ensuring explainability, interpretability, and fairness; and handling detrimental bias.
  • Information classification standards that offer clear instructions on the use of AI services and guarantee the safety of intellectual property, regulated data, and confidential information.
  • Adherence to data management and retention guidelines that ensure compliance with corporate security and privacy regulations.
  • Honest reporting of any breaches of the AI standard.
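
One way to operationalise these parameters is to keep a standard review record for every AI use case that the governance committee can audit. The sketch below is a minimal illustration; the field names and checks are assumptions, not a prescribed standard.

    from dataclasses import dataclass

    @dataclass
    class AIUsageReviewRecord:
        """Hypothetical entry a governance committee might keep per AI use case."""
        use_case: str             # what the AI service is used for
        business_owner: str       # accountable leader
        ai_service: str           # product or model being used
        data_classification: str  # highest classification of data involved
        procurement_approved: bool
        bias_and_safety_reviewed: bool
        retention_policy_followed: bool

    def is_compliant(record: AIUsageReviewRecord) -> bool:
        """A record is compliant only if every mandatory check has passed."""
        return (
            record.procurement_approved
            and record.bias_and_safety_reviewed
            and record.retention_policy_followed
        )

    example = AIUsageReviewRecord(
        use_case="AI-assisted code completion",
        business_owner="Head of Engineering",
        ai_service="Code assistant",
        data_classification="internal",
        procurement_approved=True,
        bias_and_safety_reviewed=True,
        retention_policy_followed=True,
    )
    print(is_compliant(example))  # True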

A Policy for the Greater Good

Not every organisation will have the same use cases for AI technologies, but it’s important to adhere to best practices for the secure and ethical use of AI. Here are some fundamental values to keep in mind:

  • Develop and utilise AI for the shared benefit of the organisation.
  • Ensure fairness and eliminate bias in the algorithms, datasets, and reinforcement learning techniques being used (see the sketch after this list).
  • Ensure that stakeholders are responsible for any AI usage and utilise explainability as a fundamental principle in the model-building process.
  • Prioritise the secure handling of company data and intellectual property.
  • Compile and record all applications of models and data sets.
  • Oversee and verify the ethical utilisation of AI and data sets.
  • Focus on AI implementation to enhance productivity, boost operational efficiency, and drive innovation.
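
As a minimal illustration of the fairness point above, the sketch below computes a simple demographic parity gap between two groups in a toy dataset. The data and the tolerance threshold are assumptions; real bias audits involve far more than a single metric.

    # Minimal demographic parity check on a toy dataset (all values hypothetical).
    # A large gap in positive-outcome rates between groups is one signal of bias.
    outcomes = [
        {"group": "A", "approved": True},
        {"group": "A", "approved": True},
        {"group": "A", "approved": False},
        {"group": "B", "approved": True},
        {"group": "B", "approved": False},
        {"group": "B", "approved": False},
    ]

    def approval_rate(records, group):
        members = [r for r in records if r["group"] == group]
        return sum(r["approved"] for r in members) / len(members)

    gap = abs(approval_rate(outcomes, "A") - approval_rate(outcomes, "B"))
    print(f"Demographic parity gap: {gap:.2f}")  # 0.33 on this toy data

    if gap > 0.1:  # assumed tolerance; real thresholds depend on context and regulation
        print("Flag for review by the governance council.")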

AI technologies can drive widespread transformation across multiple industries, and they should always be used for the collective good. That’s why organisations must create policies and build a socially responsible environment that fosters innovation while preventing negative uses and implications of AI.
