AI Needs Ethics Code, Says House Of Lords

Tom Jowitt is a leading British tech freelancer and long-standing contributor to TechWeek Europe

The UK is uniquely positioned to shape the development of artificial intelligence, but a code of ethics is needed so that the technology benefits mankind

The House of Lords has published a comprehensive report into artificial intelligence (AI) and is calling for an AI code of ethics.

The “AI in the UK: ready, willing and able?” report also said that the UK is in a “unique position” to help shape the development of AI, to ensure the tech is only applied for the benefit of mankind.

The report comes after 10 months of consultation by the 13-member Committee, which was tasked with assessing the economic and social impact of artificial intelligence in July 2017. It interviewed companies such as DeepMind and Microsoft to help gain insight into the area.

UK leadership?

The arrival of AI has been one of the most vexing tech developments for politicians and tech experts in recent years, after figures including the late Stephen Hawking warned of the dangers the technology could present.

The House of Lords said that its inquiry has concluded that the UK is in a strong position to be among the world leaders in the development of artificial intelligence during the twenty-first century.

“Britain contains leading AI companies, a dynamic academic research culture, a vigorous start-up ecosystem and a constellation of legal, ethical, financial and linguistic strengths located in close proximity to each other,” said the report. “Artificial intelligence, handled carefully, could be a great opportunity for the British economy. In addition, AI presents a significant opportunity to solve complex problems and potentially improve productivity, which the UK is right to embrace. Our recommendations are designed to support the Government and the UK in realising the potential of AI for our society and our economy, and to protect society from potential threats and risks.”

The report recommended the creation of a growth fund for UK SMEs working with AI to help them scale their businesses; a PhD matching scheme with the costs shared with the private sector; and the standardisation of mechanisms for spinning out AI start-ups from the research being done within UK universities.

AI ethics

And the report called for the creation of a code of ethics for AI.

“The UK must seek to actively shape AI’s development and utilisation, or risk passively acquiescing to its many likely consequences,” said the report. “We propose five principles that could become the basis for a shared ethical AI framework.”

“While AI-specific regulation is not appropriate at this stage, such a framework provides clarity in the short term, and could underpin regulation, should it prove to be necessary, in the future,” it said. “Existing regulators are best placed to regulate AI in their respective sectors. They must be provided with adequate resources and powers to do so.”

“By establishing these principles, the UK can lead by example in the international community,” the report stated.

“There is an opportunity for the UK to shape the development and use of AI worldwide, and we recommend that the Government work with Government-sponsored AI organisations in other leading AI countries to convene a global summit to establish international norms for the design, development, regulation and deployment of artificial intelligence,” it said.

Expert Reaction

The report was welcomed by a number of tech experts.

“The House of Lords Artificial Intelligence Committee’s report – AI in the UK: Ready, Willing and Able – contributes to the debate surrounding the issue of AI and ethics that has been happening for decades,” said Matt Walmsley, EMEA Director at Vectra.

“To influence and control an AI instance, its creator can program an ‘operating principle’ to replicate a chosen moral framework,” he said. “This has the potential to limit or stop the AI from what is considered ‘doing harm’. Human moral frameworks are, however, dynamic. This raises the question of who should choose and revise these ‘morals’ – should it be the user, the AI’s creator, the government or another legislative body?”

“Our tendency to anthropomorphise AI technology perhaps comes from the widespread influence of science fiction,” he added. “AI in today’s workplace is more ‘Robocop’ than ‘Terminator’s SkyNet’. It augments human capabilities so that systems can operate at speeds and scales that humans alone cannot. In this context, moral risk is extremely low.”

Another expert also questioned who will prevent bias in machine learning.

“This is a good start,” said Brandon Purcell, principal analyst at Forrester, speaking about the report.

“On the plus side, the report shows that the House of Lords believes in the inevitable proliferation of AI and is considering its many ethical implications. Their awareness of AI’s propensity to learn and reinforce the ‘prejudices of the past’ is encouraging,” he said.

“However the devil is in the details – or in this case, the data,” said Purcell. “To prevent bias in machine learning, you must understand how bias infiltrates machine learning models. And politicians are not data scientists, who will be on the front lines fighting against algorithmic bias. And data scientists are not ethicists, who will help companies decide what values to instill in artificially intelligent systems.”

“At the end of the day, machine learning excels at detecting and exploiting differences between people,” he said. “Companies will need to refresh their own core values to determine when differentiated treatment is helpful, and when it is harmful.”

Another expert highlighted the job-creating potential of AI.

“AI is shaping the future of our workforce,” said Jan Mueller, global VP at Korn Ferry. “Given the speed at which researchers advance AI technology, many studies, as well as prominent figures in the industry – think Elon Musk – predict AI will overtake human jobs.”

“But, as this latest parliamentary report shows, many jobs will, in fact, be enhanced by AI,” said Mueller. “Although many jobs will disappear, a whole new wave will be created meaning we will see increasing demands for new skill sets in virtually every job and profession.”

Another expert said that AI experts need to work with UK businesses.

“It is important that AI systems are introduced in a careful and ethical way,” said Doron Youngerwood, product marketing manager, artificial intelligence at Amdocs. “Where they lack in-house experience, UK businesses should work with AI expert partners which can outline definable ROI and clear use cases using today’s proven technology.”

“These experts can help upskill staff on the nuances of the AI technology they are using, whilst ensuring they understand the consequences of using it erroneously,” he said. “There are understandably some concerns that AI automation will lead to job losses. However, this is likely to be balanced out by the creation of new jobs, with expertise in the field needed to ensure the technology is developed and integrated effectively.”

It should be remembered that while the UK believes it is in a good position to develop the future of AI, there are other countries (most notably China and the US) also making big strides here.

And France is also seeking to be a leading AI developer. Samsung, for example, recently announced that it will build a massive AI research centre in that country.
