
Government Needs ‘Careful Consideration’ To Avoid The Pitfalls Of AI

As News Editor of Silicon UK, Roland keeps a keen eye on the daily tech news coverage for the site, while also focusing on stories around cyber security, public sector IT, innovation, AI, and gadgets.


Government report on the future of artificial intelligence (AI) says Whitehall must consider the impact of the technology on privacy, jobs and other areas

The government needs to proceed with caution and transparency if the UK is to get the most out of the evolution of artificial intelligence.

A report from the Government Office for Science, titled Artificial intelligence: opportunities and implications for the future of decision making, championed the benefits AI can bring in terms of automation, data crunching, and assisting with decision making and the allocation of resources, both in government and in industries including healthcare, transport and retail.

However, the report warned Whitehall to proceed carefully with its own adoption of AI technology, and that of the private sector, on topics such as data protection, use and transparency.

It also suggested the government should figure out who is accountable for AI decisions, and play a role in facilitating the development of the new skills the human workforce will need as AI and robots take over less skilled labour positions.

AI on the rise

“It offers huge potential to enable more efficient and effective business and government but the use of artificial intelligence brings with it important questions about governance, accountability and ethics,” wrote Sir Mark Walport, government chief scientific advisor, and Mark Sedwill, Home Office permanent secretary.

“Realising the full potential of artificial intelligence and avoiding possible adverse consequences requires societies to find satisfactory answers to these questions.”

The report highlights that AI has already arrived and is part of everyday life. It notes that, hype aside, basic AI systems have spread beyond experimental projects, and that the likes of the Google Assistant are now a core part of the search company’s new smartphones.

“Artificial intelligence can help both companies and individual employees to be more productive. Routine administrative and operational jobs can be learned by software agents (‘bots’), which can then prioritise tasks, manage routine interactions with colleagues (or other bots), and plan schedules,” the report said.

“Email software like Google’s Smart Reply can draft messages to respondents based on previous responses to similar messages. Newsrooms are increasingly using machine learning to write sports reports and to draft articles: in the office, similar technology can produce financial reports and executive briefings.”

As such, the report advises that the government needs to get ahead of the positive and negative impacts of the rise of AI.

Privacy concerns

“It is important to recognise that, alongside the huge benefits that artificial intelligence offers, there are potential ethical issues associated with some uses. Many experts feel that government has a role to play in managing and mitigating any risks that might arise,” the authors wrote, noting that two broad areas need consideration.

The first is to develop an understanding of the potential impacts AI can have on individual freedoms such as privacy and consent, given that smart systems rely on data to operate, including increasing amounts of personal data when delivering tailored services.

There is a concern that AI techniques can infer private information from public data, such as individuals’ online behaviour, encroaching on privacy. There is also a risk of algorithmic bias stereotyping people, particularly with deep learning techniques, where systems are often trained on historical data.

“As an example, imagine a university that uses a machine learning algorithm to assess applications for admission. The historical admissions data that is used to train the algorithm reflects the biases, conscious or unconscious, of these early admissions processes. Biases present in society can be perpetuated in this way, exacerbating unfairness,” the report said.

“To mitigate this risk, technologists should identify biases in their data, and take steps to assess their impact.”
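The report’s admissions example can be sketched in a few lines of Python. All names, groups and numbers below are hypothetical, invented for illustration; the “model” is just the historical admission rate per group, a stand-in for what a real classifier would absorb from its training data.

```python
# Hypothetical sketch: a model trained on biased historical admissions
# decisions reproduces that bias, and a simple audit can surface it.

from collections import defaultdict

# Synthetic historical records: (group, admitted). Group "B" was
# historically admitted at a lower rate despite comparable merit.
history = [("A", True)] * 70 + [("A", False)] * 30 \
        + [("B", True)] * 40 + [("B", False)] * 60

def train(records):
    """'Learn' each group's historical admission rate -- the bias in
    the training data survives directly into the learned model."""
    totals, admits = defaultdict(int), defaultdict(int)
    for group, admitted in records:
        totals[group] += 1
        admits[group] += admitted
    return {g: admits[g] / totals[g] for g in totals}

def audit(rates, threshold=0.1):
    """Flag groups whose learned rate diverges from the overall mean
    by more than `threshold` -- the kind of check the report says
    technologists should run on their data."""
    mean = sum(rates.values()) / len(rates)
    return {g: r for g, r in rates.items() if abs(r - mean) > threshold}

rates = train(history)
print(rates)         # the historical disparity (0.7 vs 0.4) is preserved
print(audit(rates))  # both groups flagged: disparity exceeds threshold
```

The point of the audit step is the report’s mitigation advice: bias in training data cannot be wished away, but measuring group-level disparities before deployment at least makes it visible.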

