Government Told To Open Up On AI Decision-Making

Government advisory body publishes roadmap to ‘responsible’ use of AI and algorithms, following summer chaos over computer-determined school grades

Government bodies must be more open about how algorithms are used in decision-making, an advisory body has said.

The Centre for Data Ethics and Innovation published a roadmap for increasing government transparency on the use of algorithms, while recommending that data be actively used to identify and mitigate bias.

The CDEI also said the government should issue guidance clarifying how the Equality Act applies to decisions made with the aid of computer algorithms.

The review comes after a public outcry in the summer over the use of an algorithm to “moderate” school grades, after exams were cancelled.

Exam chaos

Exam regulators were forced to back down over algorithmically determined A-level and GCSE results that saw nearly 40 percent of marks downgraded.

The regulator was accused of breaching anti-discrimination legislation and failing to uphold standards.

And last month the government was forced to backtrack on parts of its planning reforms after MPs accused ministers of using a faulty computer-based formula to decide house building targets.

The CDEI study, aimed at determining ways to combat the risks of algorithmic bias, analysed computer-aided decision making in financial services, local government, policing and recruitment.

The body’s roadmap is aimed at improving accountability and transparency, while creating an ecosystem of industry standards and professional services that allow algorithms to be used safely.

AI ‘opportunity’

The area presents an opportunity for the UK to take the lead in an emerging field, the CDEI said.

“Leadership in this area can not only ensure fairness for British citizens, but can also unlock growth by incubating new industries in responsible technology,” the report found.

The CDEI has initiated a programme of work on AI assurance aimed at identifying what is needed to develop strong AI accountability in the UK, while working with the Government Digital Service to pilot an approach to algorithmic transparency.

The body is also supporting a police force and a local authority to apply lessons learnt and develop practical governance structures, while building public engagement.


“It is vital that we work hard now to get this right as adoption of algorithmic decision-making increases,” said CDEI board member Adrian Weller.

“Government, regulators and industry need to work together with interdisciplinary experts, stakeholders and the public to ensure that algorithms are used to promote fairness, not undermine it.”

The Information Commissioner’s Office said algorithms can be used responsibly to “transform society for the better”, and recommended organisations make use of the ICO’s own guidance on the use of AI.