TrustyAI – an open source project looking to solve AI’s bias problem

By Rebecca Whitworth, Manager, Software Engineering at Red Hat.

Artificial intelligence (AI) is an exciting area of technical development. As the tools and methodologies advance, many organisations are looking to use AI to improve business efficiencies, bring innovation to customers faster, gain actionable market insights and more.

However, the rush to put AI in place without always knowing what it can be used for, or how to use it, can lead to problems with both the systems and the data itself. We have heard many stories of AI making the “wrong” decision because of built-in biases, and in some cases the outcome can be life or death.

This is a challenge that open source project TrustyAI – built on Red Hat OpenShift and Kogito – is looking to address.

AI gone wrong

AI is essentially a set of priorities and decisions a system has been programmed to follow. And because humans are responsible for that programming, our flaws become the system’s flaws. This happens in two main ways. The first is cognitive bias, which comes from the person, or people, who train the data model and build bias into the AI logic. The second is a lack of data, or a lack of diverse data. For example, we’ve seen datasets dominated by white males, meaning AI trained on them filters out anyone who doesn’t fit those characteristics.

There is a growing body of research on the transfer of bias from humans to AI. As AI becomes more prevalent, it is being used to decide critical life moments, such as whether you get into college, get a mortgage, qualify for a medical operation or even receive a particular prison sentence, and those high stakes make the negative outcomes easier to spot.

The data science community is very aware of the need to stop systems perpetuating bias, and all the big tech companies are now looking at this. IBM’s AI Fairness 360, Google’s “What-If Tool” and Amazon SageMaker Clarify all go to show how significant and competitive this field has become.

Identifying the problem

If you aren’t proactively looking for bias, you may not notice that the AI you are using for hiring is selecting people based on their gender and name rather than their technical skills and experience, for example. There hasn’t been a simple and reliable way to test the balance and accuracy of an AI model, so the first time a business knows it has an issue is when it gets negative PR or someone brings a case against it. This black-box opacity has also become a legal risk with the advent of the General Data Protection Regulation (GDPR), under which anybody in Europe has the right to ask what data is held about them and how it is being used.

TrustyAI is a standalone piece of software that plugs into any AI application, regardless of what environment or technology it’s running on. It introspects the system, looking at all the various inputs, and maps how these influence the outputs. That analysis is served up in easy-to-visualise charts, which make it clearer if any biases exist. The user can alter the values to see the effect of different criteria.

For example, with a rejected mortgage application, you’d expect a different outcome if the applicant had a higher credit rating or earned more money. But if TrustyAI finds that changing only the applicant’s ethnicity or postcode produces a different outcome, then it’s clear the system has bias. We call this Explainable AI, or XAI for short.
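TrustyAI’s explainers do this analysis rigorously; purely to illustrate the idea, here is a minimal Python sketch of a counterfactual probe, where `score_application` is a hypothetical stand-in for any trained decision model (the names and thresholds are invented for the example, not TrustyAI’s actual API):

```python
def counterfactual_probe(model, application, protected_field, alternatives):
    """Re-score an application, varying only one protected field.

    Returns each tried value of the field mapped to the model's decision,
    so a decision that flips with ethnicity or postcode stands out.
    """
    results = {application[protected_field]: model(application)}
    for value in alternatives:
        variant = dict(application, **{protected_field: value})  # copy with one field changed
        results[value] = model(variant)
    return results


def score_application(app):
    # Hypothetical toy model: decides on credit score, but a postcode
    # penalty has crept in -- exactly the kind of bias we want to surface.
    score = app["credit_score"] - (100 if app["postcode"].startswith("E1") else 0)
    return "approved" if score >= 650 else "rejected"


application = {"credit_score": 700, "income": 42_000, "postcode": "E1 6AN"}
print(counterfactual_probe(score_application, application,
                           "postcode", ["SW1A 1AA", "M1 1AE"]))
# {'E1 6AN': 'rejected', 'SW1A 1AA': 'approved', 'M1 1AE': 'approved'}
```

Identical inputs that differ only in postcode receive different decisions, which is precisely the signal TrustyAI surfaces in its charts.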

TrustyAI in action

TrustyAI has been built with Kogito (an open source, end-to-end business process automation technology) and Red Hat OpenShift, so it is cloud-native and can be deployed to any environment. It is written in Java and integrates with Python, the language data scientists generally use, so a user only needs to open a notebook and apply library functions to their datasets; no reconfiguration is required.
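To give a flavour of that notebook workflow, the snippet below uses pandas to compute a simple group-fairness measure (statistical parity difference, the gap in positive-outcome rates between groups) over a toy hiring dataset. It is a sketch of the kind of dataset-level check such library functions perform, not TrustyAI’s actual API, and the data and column names are invented for the example:

```python
import pandas as pd

# Toy hiring-decision dataset; values and column names are illustrative only.
df = pd.DataFrame({
    "gender":   ["F", "F", "M", "M", "F", "M", "M", "F"],
    "selected": [0,   0,   1,   1,   0,   1,   0,   1],
})

# Selection rate per group, and the gap between them. A gap near zero
# suggests parity; a large gap is a prompt to investigate the model.
rates = df.groupby("gender")["selected"].mean()
print(rates.to_dict())                      # {'F': 0.25, 'M': 0.75}
print(f"statistical parity difference: {rates['F'] - rates['M']:+.2f}")  # -0.50
```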

Open source has been critical to TrustyAI. Tapping into open communities has enabled us to develop at a faster pace. For example, KIE (Knowledge Is Everything), an open source community for business automation, has been a brilliant partner: its repository hosts the TrustyAI code, and its members continually perform testing, offer fixes and suggestions, and share their datasets. Without this collaboration (our in-house project team is just myself, three developers and one architect), progress would have been much slower.

We’re really excited about the impact that TrustyAI can have. Reputation management can make or break a business, and there’s a real clamour to mitigate the risks of AI. Understandably, businesses don’t want to slow down their adoption of AI, and TrustyAI helps give them the confidence to push forward while also prioritising transparency and accountability.

As a project team, we are looking at how to apply TrustyAI across Red Hat’s portfolio to add strategic value — both internally and for our customers — while continuing our research on all things AI, from cognitive bias and ethics to developments in data science and algorithmic best practices.

About Red Hat

Red Hat is the world’s leading provider of enterprise open source software solutions and services, using a community-powered approach to deliver reliable and high-performing Linux, hybrid cloud, container, and Kubernetes technologies. Red Hat helps customers develop cloud-native applications, integrate existing and new IT, and automate, secure, and manage complex environments. Follow Red Hat on Twitter: @RedHat
