
Responsible AI starts with transparency


By: Eddie Pettis and Haifeng Li, Senior Distinguished Engineers, Technology and Architecture, Office of the CTO, Equinix

In today’s enterprise AI use cases, success depends on trust, and trust depends on transparency

AI models are already driving how decisions get made in enterprise settings, but there’s potential for them to do even more. Companies are looking for opportunities to apply AI to optimise every aspect of their operations, and they know they need to do it before the competition does.

One challenge preventing wider AI adoption is a lack of transparency. Many modern AI models are essentially black boxes; no one can truly explain why they return the results they do. This limits the responsible application of AI across industries. For example, AI can help doctors make informed diagnoses more quickly, but how can doctors trust AI models they don’t truly understand? The bottom line is that enterprise-grade AI models won’t be considered viable unless they’re trusted, and they won’t be trusted unless they’re transparent.

It’s easy to see why we need AI transparency. The question of how to build a transparent AI framework is much more complex. It will require collaboration among many different parties, including the operators, providers and consumers in emerging AI data and model marketplaces. Building AI transparency isn’t a single-point operation; it’s a journey with many steps. We’ll describe a few of the steps in that journey below.

It all starts with data

Like any other aspect of AI, data is fundamental to transparency. We can’t begin to explain the results of an AI model without first looking at the data used to train that model. The source and lineage of datasets are like a roadmap that can illustrate how a model ended up at a particular outcome.
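To make this concrete, dataset lineage can be captured as structured metadata that travels with the data itself. The Python sketch below is a minimal, hypothetical illustration; the record fields and function names are our own assumptions rather than any standard or vendor API.

```python
# Illustrative sketch only: a minimal lineage record for a training dataset.
# Field and function names are hypothetical, not a standard or vendor API.
import hashlib
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DatasetRecord:
    name: str          # human-readable dataset name
    source: str        # where the data came from (URL or system of record)
    licence: str       # usage terms attached to the data
    sha256: str        # content hash, so the exact snapshot can be verified
    created_at: str    # when this snapshot was taken (UTC, ISO 8601)
    derived_from: list = field(default_factory=list)  # parent dataset hashes

def record_dataset(name, source, licence, content: bytes, parents=None):
    """Hash a dataset snapshot and capture its lineage metadata."""
    return DatasetRecord(
        name=name,
        source=source,
        licence=licence,
        sha256=hashlib.sha256(content).hexdigest(),
        created_at=datetime.now(timezone.utc).isoformat(),
        derived_from=parents or [],
    )

# A derived dataset points back at its parent, forming the 'roadmap':
raw = record_dataset("sensor-raw", "plant-telemetry-db", "internal", b"...")
clean = record_dataset("sensor-clean", "etl-pipeline", "internal", b"...",
                       parents=[raw.sha256])
```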

This is particularly challenging with foundation models in the current AI landscape. Most of today’s foundation models have been trained on data scraped from the public internet, so it’s essentially impossible for users to understand a dataset of that scale. Even the model providers themselves aren’t always able to fully understand the composition of their own training data when it’s pulled from so many sources across the entire internet. And even if they could, they wouldn’t be required to disclose that information to model users.

This lack of data transparency is one reason that using publicly available AI models may not be appropriate for enterprises. However, there are ways to work around this. For instance, you can build proxy models: simple models used to approximate the results of your more complex AI models. Building a good proxy model requires you to balance the tradeoff between simplicity and accuracy. Nevertheless, even a very simple approximation can help you understand how each input feature impacts the model’s predictions.
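Here’s a minimal sketch of the idea in Python, assuming scikit-learn is available. The random-forest ‘black box’, the synthetic data and the linear proxy are all illustrative stand-ins; the key detail is that the proxy is trained on the complex model’s predictions, so its coefficients give an approximate view of each feature’s influence.

```python
# Minimal proxy-model sketch (assumes scikit-learn and NumPy are installed).
# A simple, interpretable model is fitted to the *predictions* of a complex
# black-box model; its coefficients then approximate per-feature influence.
import numpy as np
from sklearn.ensemble import RandomForestRegressor  # stand-in black box
from sklearn.linear_model import LinearRegression   # transparent proxy

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))                      # illustrative features
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(size=1000)

black_box = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Train the proxy on the black box's outputs, not on the ground truth:
proxy = LinearRegression().fit(X, black_box.predict(X))

# Fidelity: how closely the simple model tracks the complex one
# (closer to 1 means a more faithful approximation).
print("proxy fidelity (R^2 vs black box):",
      round(proxy.score(X, black_box.predict(X)), 3))
print("approximate per-feature influence:", proxy.coef_.round(2))
```

Fidelity is the number to watch here: a proxy that tracks little of the complex model’s behaviour will mislead more than it illuminates.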

Control builds confidence

When it comes to building trust, it’s impossible to fully separate your AI models from the humans who use them. Humans naturally want to have some control over the tools they use; if you can’t give employees that sense of control, it’s unlikely they’ll continue to use AI.

This phenomenon is known as algorithm aversion. The seminal research paper on this topic found that giving users control helps overcome algorithm aversion. Notably, users in the study didn’t ask to throw out the algorithm’s results altogether. Giving users even a very small amount of control (essentially, letting them choose whether to accept an outcome or tweak it slightly) was enough to build confidence.
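One way to picture that finding in code is to let users nudge a forecast within a small band rather than override it wholesale. The sketch below is purely illustrative; the 10 per cent band and the function name are our assumptions, not the study’s actual procedure.

```python
# Illustrative sketch of bounded user control over a model's output.
# The 10% band is our assumption, not the study's procedure; the point is
# that even tightly bounded control is still control.
def apply_user_adjustment(model_forecast, user_delta, max_fraction=0.1):
    """Let a user nudge the model's forecast within a small band."""
    bound = abs(model_forecast) * max_fraction
    clamped_delta = max(-bound, min(bound, user_delta))
    return model_forecast + clamped_delta

# A user tries to pull a forecast of 100 down by 30; they get 90 instead.
print(apply_user_adjustment(100.0, -30.0))  # -> 90.0
```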

Avoiding algorithm aversion is especially important for high-risk, low-volume use cases that require a human touch. High-volume use cases are typically automated to begin with, and AI is just another method of automation. For these use cases, ‘explainability’ still has a role to play in debugging models, but acceptance is less of an issue.

Embracing probability

Giving users the confidence level behind each prediction is another step that can improve transparency. In fact, most forecasting models already produce a posterior probability or confidence interval under the hood; they just collapse it into a single value before presenting it. For instance, suppose a retailer uses a model that tells them they need to stock 100 units of a particular product every month to meet customer demand. In reality, the retailer most likely needs between 75 and 125 units a month. For simplicity, the model just gives users the midpoint of that range.

While it may feel more intuitive to give users a specific value to act upon, it’s also less transparent. Suppose the retailer only sells 80 units in the first month. If they were acting on the specific forecast of 100 units, they’d feel like the model failed. If they had the full range of probable values, they’d know the model was correct. Over time, they’d see that the model is usually correct, and their confidence would increase.
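Here is a minimal sketch of what surfacing the interval might look like, using the retail numbers above; the sampled distribution is a hypothetical stand-in for whatever the real forecasting model produces (bootstrap replicates, a Bayesian posterior, quantile forecasts and so on).

```python
# Sketch: report an interval, not just the midpoint. The samples below are
# a hypothetical stand-in for the real model's predictive distribution.
import numpy as np

rng = np.random.default_rng(42)
demand_samples = rng.normal(loc=100, scale=13, size=10_000)

point = float(np.median(demand_samples))
low, high = np.percentile(demand_samples, [2.5, 97.5])

print(f"point forecast: {point:.0f} units")
print(f"95% interval:   {low:.0f} to {high:.0f} units")
# A month with 80 units sold falls inside the interval, so the model was
# right, even though it missed the midpoint.
```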

For ranking problems such as recommended actions for data center control, it can be helpful for the model to share several relevant outcomes, not just one recommendation. This increases the likelihood that human users will see the outcomes they expected to see, which in turn helps build confidence in the model.
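As a sketch, returning the top few scored actions rather than a single winner might look like this; the candidate actions and scores are hypothetical.

```python
# Sketch: surface the top-k candidate actions with scores instead of a
# single recommendation. The actions and scores below are hypothetical.
def top_k_actions(actions, score_fn, k=3):
    """Rank candidate actions and return several plausible options."""
    ranked = sorted(actions, key=score_fn, reverse=True)
    return [(action, score_fn(action)) for action in ranked[:k]]

scores = {"raise cooling setpoint 1C": 0.91, "increase fan speed": 0.88,
          "shift workload to adjacent hall": 0.74, "take no action": 0.40}
for action, score in top_k_actions(list(scores), scores.get):
    print(f"{score:.2f}  {action}")
```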

Setting expectations

It’s also important to understand that different models are used for different purposes, which will logically lead to different outcomes. For instance, certain use cases, such as managing large manufacturing plants, are typically too complex to simulate reliably. Instead, users tend to rely on reinforcement learning, a subset of machine learning that focuses on optimising expected outcomes. Success with reinforcement learning depends on managing the tradeoff between exploitation and exploration.

Exploitation means acting upon the knowledge you already have in order to get the best possible results. In this case, the model should behave predictably. In contrast, exploration is about gaining new knowledge. The model will inevitably take actions that seem unusual to human observers, but that’s by design. The model is essentially going through a trial-and-error process. The only way it can identify the ideal action for a particular scenario is to first simulate taking all the wrong actions in a controlled environment.

When a model is exploring, it’s important for human users to know, so they’re not surprised when they see it exhibiting unexpected behaviour.
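A minimal epsilon-greedy sketch shows how that signal could be surfaced: alongside each recommended action, the model reports whether it was exploiting its best-known option or deliberately exploring. The epsilon value and action values below are illustrative assumptions.

```python
# Minimal epsilon-greedy sketch. The chosen action is returned together
# with an "exploring" flag so a human operator can distinguish deliberate
# trial-and-error from the model's best-known choice. Epsilon and the
# action values are illustrative assumptions.
import random

def choose_action(estimated_values, epsilon=0.1):
    """Pick an action; flag whether this was exploration or exploitation."""
    if random.random() < epsilon:
        return random.choice(list(estimated_values)), True   # exploring
    best = max(estimated_values, key=estimated_values.get)
    return best, False                                       # exploiting

values = {"action_a": 1.2, "action_b": 0.9, "action_c": 0.4}
action, exploring = choose_action(values)
status = "exploring (trial-and-error)" if exploring else "exploiting best known"
print(f"chose {action}: {status}")
```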

Centering the human user

Finally, AI models should always present outcomes as recommendations to act upon. This gives human users the ultimate sense of control: knowing that AI models are there to augment their capabilities, not replace them. In our view, AI stands for augmented intelligence.

Returning to our example about doctors using AI to diagnose patients: we know there has to be transparency for the doctor to trust the model. But there must also be transparency for the patient to trust and consent to an AI-driven care plan. The patient needs to know they’re receiving a diagnosis based on both the model’s recommendations and the doctor’s years of training and experience. Getting this informed consent would be an essential part of using AI responsibly in the healthcare sector.

Build AI models and datasets in a trusted environment

Building a trusted approach to AI requires enterprises to track data provenance and lineage and to build models in a way that gives users confidence and control. Both would be practically impossible with public AI models. That’s why many enterprises are turning to private AI: building their own models, hosted on private infrastructure and trained only on proprietary datasets.

 
