Tech Firms Face Test On Their Algorithms For Bias

Artificial Intelligence (AI)

Computer algorithms and artificial intelligence systems would be required to show they are free of bias around race, gender and other characteristics

Tech firms face a potential new law in the United States to test their computer algorithms for any hint of bias.

The US Congress is to consider a draft bill from Democratic Senator Ron Wyden, in which he proposes that tech firms of a certain size must show their computer algorithms are free of bias around race, gender and other factors before they are implemented.

The US is not alone in this regard. Last month, for example, the British government ordered an independent watchdog to investigate the potential for bias in algorithmic decision-making, including AI used in criminal justice cases.

US proposal

The proposed bill, called the ‘Algorithmic Accountability Act of 2019’, would direct the Federal Trade Commission (FTC) to require “entities that use, store, or share personal information to conduct automated decision system impact assessments and data protection impact assessments.”

It seeks to ensure that these firms’ ‘automated decision systems’ undergo an impact assessment covering both the system itself and its development process, checking for “accuracy, fairness, bias, discrimination, privacy, and security.”
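The bill does not prescribe how fairness should be measured, but one common check such an impact assessment might include is demographic parity: comparing the rate of favourable outcomes across groups. The sketch below is illustrative only; the metric, group labels and data are assumptions, not anything specified in the legislation.

```python
# Minimal sketch of one fairness check an impact assessment might run:
# the demographic parity gap, i.e. the difference in favourable-outcome
# rates between two groups. All names and data here are hypothetical.

def positive_rate(decisions):
    """Fraction of decisions that are favourable (True)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(group_a, group_b):
    """Absolute difference in favourable-outcome rates between groups."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Hypothetical screening outcomes (True = favourable) for two groups.
group_a = [True, True, False, True, False, True, True, False]   # 5/8 favourable
group_b = [True, False, False, False, True, False, False, False]  # 2/8 favourable

gap = demographic_parity_gap(group_a, group_b)
print(f"Demographic parity gap: {gap:.3f}")  # prints 0.375
```

A large gap would not by itself prove unlawful discrimination, but it is the kind of signal an assessment could flag for further review.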

Not a small test then, and some are concerned the bill could be used to restrict algorithmic decision-making or artificial intelligence (AI), despite the fact that these systems are now commonly used, for example to deliver targeted adverts.

Another area where computer algorithms are used is recruitment, where they can screen CVs and shortlist candidates.

Essentially, the US senator wants to apply this algorithmic bias check to any firm with annual sales above $50m (£38m), or which holds data on more than one million people.

Bias concerns

The increasing use of computer algorithms and AI in everyday systems has driven concerns about the fairness of this technology.

The British government has been concerned about the impact of AI on privacy, transparency and data use since as far back as 2016.

And the commercial world is taking note of these concerns.

Last September, for example, IBM rolled out a cloud-based tool designed to detect bias in artificial intelligence models and give the organisations using them better visibility into why those models make the decisions they do.
