IBM Introduces Open Source Tool For Monitoring AI Bias

Cloud-based Fairness 360 toolkit is the latest effort to address transparency and liability issues in corporate AI models

IBM is to roll out a cloud-based tool designed to detect bias in artificial intelligence models and to give the organisations using them better visibility into how those models reach their decisions.

The move highlights a growing focus on AI management within enterprises, with IBM and others arguing that liability issues around the technology are holding back large-scale deployments.

The issue extends beyond the business world: Darpa, the Pentagon’s research arm, said earlier this month that it is planning a major investment in the area.

Darpa said “explainable” AI would be a significant part of a planned $2 billion (£1.54bn) investment in artificial intelligence research over the next five years.


Darpa director Steven Walker said the ability of AI systems to explain to humans in real time how they arrived at a particular answer was “critically important” for giving military commanders confidence they could rely on the technology.

Similarly, IBM’s Institute for Business Value found that 82 percent of enterprises were considering AI deployments, but 60 percent feared liability issues.

AI’s limitations in technologies such as facial recognition have caused concern, but IBM said questions have also been raised over the technology’s use in areas such as insurance claims processing, credit scoring and medical expenses.

The company said its open source Fairness 360 tool provides tutorials for these and other areas.

The platform provides a visual dashboard indicating how algorithms are making decisions and which factors come into play in making final recommendations.

It also tracks the model’s accuracy, performance and fairness over time, helping firms comply with regulations.
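
To make the fairness measurements concrete: such metrics are typically computed by comparing outcome rates between a privileged and an unprivileged group. The sketch below uses the Python library from the open-source toolkit described further down (AIF360 on GitHub); the loan-approval data, column names and group definitions here are hypothetical.

```python
import pandas as pd

from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Hypothetical loan-approval outcomes. 'age_group' is the protected
# attribute (1 = privileged, 0 = unprivileged); 'approved' is the label.
df = pd.DataFrame({
    'age_group': [1, 1, 1, 1, 0, 0, 0, 0],
    'income':    [60, 75, 50, 90, 55, 70, 48, 85],
    'approved':  [1, 1, 0, 1, 0, 1, 0, 0],
})

dataset = BinaryLabelDataset(df=df,
                             label_names=['approved'],
                             protected_attribute_names=['age_group'])

metric = BinaryLabelDatasetMetric(dataset,
                                  unprivileged_groups=[{'age_group': 0}],
                                  privileged_groups=[{'age_group': 1}])

# Statistical parity difference: P(approved | unprivileged) minus
# P(approved | privileged); 0 means parity. Disparate impact is the
# analogous ratio, where 1.0 is the ideal value.
print(metric.statistical_parity_difference())  # -0.5 on this toy data
print(metric.disparate_impact())               # ~0.33 on this toy data
```

Tracking numbers like these across retraining cycles is one way a dashboard of the kind described above could surface drift in a model’s fairness over time.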

Automated decision-making

It works with models built on machine learning frameworks including Watson, TensorFlow, SparkML, AWS SageMaker and AzureML.

The toolkit provides a library of algorithms, code and tutorials that academics, researchers and data scientists can integrate into their models. The tools are available on GitHub.
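
As a sketch of how the GitHub library fits together (based on the examples published in the AIF360 repository; exact names may vary between releases), the following measures bias on the German credit-scoring dataset supported by the toolkit’s loaders (the raw data file may need downloading separately), then applies one of its mitigation algorithms, Reweighing:

```python
from aif360.datasets import GermanDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Load the German credit dataset, treating 'age' as the protected
# attribute (privileged group: applicants aged 25 or over).
dataset = GermanDataset(protected_attribute_names=['age'],
                        privileged_classes=[lambda x: x >= 25],
                        features_to_drop=['personal_status', 'sex'])
train, test = dataset.split([0.7], shuffle=True)

privileged = [{'age': 1}]
unprivileged = [{'age': 0}]

# Measure bias before mitigation: a negative mean difference means
# the unprivileged group gets favourable outcomes less often.
before = BinaryLabelDatasetMetric(train,
                                  unprivileged_groups=unprivileged,
                                  privileged_groups=privileged)
print('Mean difference before:', before.mean_difference())

# Reweighing adjusts per-instance weights so that outcomes become
# independent of the protected attribute in the training data.
rw = Reweighing(unprivileged_groups=unprivileged,
                privileged_groups=privileged)
train_rw = rw.fit_transform(train)

after = BinaryLabelDatasetMetric(train_rw,
                                 unprivileged_groups=unprivileged,
                                 privileged_groups=privileged)
print('Mean difference after:', after.mean_difference())
```

Reweighing is a pre-processing approach; the toolkit also includes in-processing and post-processing algorithms, so bias can be addressed before, during or after model training.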

“We are giving new transparency and control to the businesses who use AI and face the most potential risk from any flawed decision-making,” said IBM Cognitive Solutions senior vice president David Kenny.

IBM said it is planning to provide feedback indicating how different decision-making factors are weighted, the confidence behind recommendations, and the accuracy, performance, fairness and lineage of AI systems.

IBM Research has also proposed introducing a transparency rating system for AI services, similar to a UL (Underwriters Laboratories) safety rating.

Google, Microsoft and Facebook are amongst the other companies developing tools aimed at making it clearer what factors are used in AI-assisted decisions.

Earlier this year the House of Lords published a report into AI in which it recommended a code of ethics for AI systems.