Fujitsu CTO: Tech Industry Needs To Take Responsibility For Impact Of AI

The technology industry needs to take responsibility for developing ethics and checks and balances for artificial intelligence (AI) systems to prevent any negative impact from smart systems, says Fujitsu chief technology officer Dr Joseph Reger.

When asked by TechWeekEurope in an interview at Fujitsu Forum 2016 in Munich if the technology industry should feel obliged to address concerns that AIs will leave some people without jobs, Reger replied it certainly should because “we are creating [AI]”.

“We’ve got to start discussions about the consequences,” he said, noting that AI will significantly disrupt the job market, forcing sectors of all kinds, and blue and white collar workers alike, to adapt if they wish to keep up with the change.

“There is a clear disruption to the job market and the only way to respond to that as an individual is to reskill, and as a society to create a framework for that to be possible.”

He added there is a need in particular to look at what children are taught in order to give them the skills for the future.

AI growing pains

Concerns about the impact of AI and other digital technology on current and future jobs are nothing new. But there is some confusion as to how to assess and address the situation, with some championing the need for government to put AI and its potential disruption in the spotlight.

Reger noted Fujitsu is already taking such a responsible approach with its ‘human-centric AI’. The company, which presents AI to businesses as a means to carry out digital transformation, aims to create systems that complement people’s lives rather than completely replace them in jobs and duties.

“Human-centric AI is the natural next step and that’s what we’ll do and the question is whether society will do that or not,” he added.

Breaking bias

However, creating such AI systems needs a careful approach as the dynamic nature of the code contained in machine learning algorithms and deep learning-based AI systems makes it difficult to see exactly what prompted an AI to come to a certain decision or take a particular action.

This means it can be difficult to spot unwanted and detrimental biases that AIs could potentially create through parsing masses of data without any human interference at the code level. And Reger highlighted that this throws up the problem of establishing who is accountable for the decisions an autonomous AI system makes.

“There is concern about this bias that is not built in, it is just generated; there is concern about the accountability of this stuff, because the systems will make decisions [but] who will they make accountable for it?” he said.

Unfortunately, there appears to be no easy answer to this problem, though according to Reger, if AIs are created and taught with a form of moral and ethical framework, such issues could be somewhat avoided.

“There isn’t much you can do outside what we normally do as mankind in that while we are raising children we create moral and ethical frameworks,” he explained.

“There is no guarantee that every participant will abide by that, but if that happens and we have regulations and laws and judicial systems and so on, by and large we are going to be ok.”

“We’ve got to create these checks and balances; it has to come to moral and ethical frameworks for AI systems.”

Such measures will need to be established sooner rather than later, as the technology industry is pursuing AI with an almost heady abandon, from adding smart virtual assistants to smartphones to Microsoft looking at turning its Azure cloud into an AI platform.


Roland Moore-Colyer

As News Editor of Silicon UK, Roland keeps a keen eye on the daily tech news coverage for the site, while also focusing on stories around cyber security, public sector IT, innovation, AI, and gadgets.
