Trades Union Congress warns workers are being left without protections from life-changing decisions made by automated AI tools
The UK government is failing to protect workers against artificial intelligence (AI), which is already being used to make life-changing decisions across the economy, the Trades Union Congress warned on Tuesday.
The TUC singled out the government’s Data Protection and Digital Information Bill, which reached its second reading in parliament on Monday, saying it would dilute important protections.
One of the bill’s provisions would narrow current restrictions on the use of automated decision-making without meaningful human involvement, while another could limit the need for employers to give staff a say in the introduction of new technologies through impact assessments, the TUC said.
This, combined with a “vague and flimsy” government white paper on the technology published last month, raises concerns that “guard rails” in the workplace are becoming nonexistent, the TUC said.
It said the government’s hands-off approach to AI meant there was “no additional capacity or resource to cope with rising demand”.
The TUC said it had found AI in use at every stage of the employment process, from initial sifting of CVs to team and work allocation, disciplinary measures and, ultimately, termination.
AI could “set unrealistic targets that then result in workers being put in dangerous situations that impact negatively on both their physical health and mental wellbeing”, the body warned.
The government responded that the TUC’s assessment was “wrong”, arguing that AI was “set to drive growth and create new highly paid jobs throughout the UK, while allowing us to carry out our existing jobs more efficiently and safely”.
‘Safely and responsibly’
The government said it was “working with businesses and regulators to ensure AI is used safely and responsibly in business settings” and that the Data Protection and Digital Information Bill included “strong safeguards” employers would be required to implement.
The AI white paper proposed spreading regulation of the technology across different existing bodies rather than creating a new dedicated agency or new laws.
The approach contrasts with that of the EU, which is working on far-reaching regulation called the AI Act.