Accelerating AI: Managing the Machines

As the Biden administration creates a national AI task force, how will AI develop over the next few years? And how will these developments be managed to ensure safety and ethics are maintained?

Last month the US government announced the creation of the National Artificial Intelligence Research Resource Task Force. The new entity is the Biden administration’s reaction to the perceived lack of world-leading expertise in developing AI systems.

AI development is often seen as the new arms race. Eric Lander, Science Advisor to the President and Director of the Office of Science and Technology Policy (OSTP), stated: “America’s economic prosperity hinges on foundational investments in our technological leadership. The National AI Research Resource will expand access to the resources and tools that fuel AI research and development, opening opportunities for bright minds from across America to pursue the next breakthroughs in science and technology.”

In the UK, an AI development roadmap has existed since the beginning of this year. The report makes a bold claim that by 2030, AI could deliver a 10% increase in GDP if this technology is developed and then harnessed correctly. In their report, UK Research and Innovation (UKRI) conclude: “The UK is well placed to take advantage (of AI). We are ranked third in the world for our research and innovation in AI, remarkable for our size, but only 11th in terms of our ability to realise innovation and impact from AI, a real gap and opportunity.”

It is certainly early days for the development of AI, but it is telling that governments are positioning themselves to support these innovations. Calling this competition an arms race may be provocative. However, the potential that AI has in terms of global commercial advantage can’t be ignored.

Speaking to Silicon UK, Brooks Wallace, VP EMEA at Deep Instinct, says: “AI’s relationship with safety, security and ethics is a highly complex one in which many use cases and scenarios are being debated ahead of them becoming a reality – the ‘What happens when, or if?’ questions. Technology advancements continue to drive transformation in the workplace. This is where lawmakers and trade unions get involved in setting legislation that monitors and controls how AI is used in specific industries and how it impacts individual workers.”

Wallace continued: “In many cases there is no absolute right and wrong; it’s a question of your perspective, or the lens through which you view the world. Once you cede certain tasks to a computer – once a machine can manage an activity – you can’t close Pandora’s box. Policing AI is a very tricky topic, as it crosses so many domains – not just legal ones, but also morality, privacy and the balance between citizen security and citizen freedom. Plus, I don’t see policing as being exclusively the responsibility of law enforcement bodies. Ethics and data privacy oversight bodies also have a role to play, for example.”

How AI develops will be multifaceted. As the industry expands and touches more businesses, ethics, governance, explainability and management all come into play.

Accelerating AI development

Silicon UK spoke with Martha Bennett, VP and Principal Analyst, Forrester.

Martha serves CIOs and other tech leaders, helping them understand the impact of emerging technologies on their business. She also provides best-practice guidance on how to assess and introduce new and emerging technologies. Martha provides in-depth coverage of blockchain technology and digital assets, as well as analytics and artificial intelligence, at a strategic level. She has also started a new research stream looking at the types of non-IT innovation necessary to get the most out of new and emerging technologies.

Setting the agenda

The speed at which AI is developing is clearly a driver behind governments paying more attention to the technology and what it means for their countries and citizens. “The primary challenge is that the technology will change far faster than regulation will ever be able to keep up with,” says Eric Tyree, Head of Research and AI, Blue Prism. “I see the problem not being with the technology itself, but with its application and bias. This means it would be better to regulate activity than the technology. For example, in the US there already are strong regulations protecting people from bias in lending. It does not matter whether the bias is human, data or AI based; there can be serious consequences if bias is found in an organisation’s lending process.”

What the US is doing may grab the headlines, but as Saar Yoskovitz, CEO at Augury, told Silicon UK, Europe needs to follow suit fast: “A unified AI task force for Europe would provide much-needed access to the resources and tools required to aid the adoption of AI across all industries. Although awareness of AI technology among European companies is growing, only 18% of enterprises have plans to adopt AI in the next two years, while 40% of enterprises in Europe do not use AI and have no plans to do so.

“Getting value from AI at scale requires much more than technology. McKinsey’s 2020 State of AI survey looked at companies’ AI practices in six areas: strategy; talent and leadership; ways of working; models, tools, and technology; data; and adoption. The companies that derived the most value from AI were more likely to engage in best practices in all of these areas than those seeing less value from AI. AI high performers were also more likely than others to recognise and mitigate most AI risks. A comprehensive approach prevents companies from getting stuck in the ‘pilot purgatory’ stage and enables them to derive significant value from AI systems.”

Developing world-class AI-based services and products has massive potential. However, the risks of unregulated innovation can’t be ignored. UKRI concludes: “At this juncture the UK has a key opportunity to maintain and build its international position in AI, and to realise the significant potential impacts of AI on society, the economy, and the environment. Taking this opportunity will require world-leading research and innovation.”

Blue Prism’s Eric Tyree also advises: “The best thing government can do is fertilise the soil for AI development and use government procurement to encourage AI. Government departments must properly digitise and leverage AI for the public good, and use the procurement programs that support this to foster the AI economy. In its origins, Silicon Valley may have been the product of technicians and entrepreneurs, but US government defence procurement provided Silicon Valley with its ‘seed and Series A’ funding.”

The creation of governing bodies for AI innovation gives enterprises several stakeholders to pay attention to as they move forward with their developments. A partnership that puts security, ethics and the citizen using the AI system first will always have to balance commercial need against compliance. However, it remains to be seen whether the fledgling task forces and innovation hubs will keep that balance and deliver AI systems that are beneficial to all.

Silicon in Focus

Peter van der Putten, Director of Decisioning and AI Solutions at Pegasystems.

Peter van der Putten is an assistant professor of AI at Leiden University and Director of Decisioning and AI Solutions at Pegasystems. He is particularly interested in how intelligence can evolve through learning, in man or machines. Peter has an MSc in Cognitive Artificial Intelligence from Utrecht University and a PhD in data mining from Leiden University, and combines academic research with applying these technologies in business. He teaches New Media New Technology and supervises MSc thesis projects.

As the US creates the National Artificial Intelligence Research Resource Task Force, does Europe need a similar body to oversee the development of AI technologies?

“Europe already has multiple organisations that focus on AI from a research perspective, but organised more from a bottom-up perspective,” van der Putten responded. “CLAIRE and the ELLIS Society are networks of research labs spread across Europe pushing the AI agenda. The EU and the non-EU European countries have national and international programs, as does the Council of Europe. Of course, by definition, it is more of a challenge to drive AI policy and investment from a single body in Europe, so a more networked approach is required.

“The European flavour of this AI policy is all about stimulating a human approach to AI by emphasising that AI needs to be worthy of our trust. Ultimately, global cooperation is what is needed. At the recent G7 summit, the EU, the US and other G7 members specifically announced that they would further collaborate on AI. In addition to announcing plans to fight the global COVID-19 pandemic, address climate and gender issues and, of course, stimulate EU, US and G7 trade and investment, the group intends to collaborate on digital technologies with potentially high impact on the economy and society, and called for aligned standards for artificial intelligence (AI) to, as von der Leyen put it, ‘promote a human-centric artificial intelligence’. This will be overseen by a newly established EU-US Trade and Technology Council.”

What are the challenges faced by any oversight organisation within what is a rapidly expanding AI tech sector?

“Policy documents on AI are plentiful, and there is much convergence of opinion on the fundamental principles to apply to AI, to ensure we can reap the benefits and that it is also worthy of our trust. Examples are robustness, safety, transparency, fairness and accountability. But the issue is how to operationalise these principles into tangible guidelines and methods.

“Also, the needle is shifting from self-regulation and discussion of ethical principles to proposed laws. The emphasis then will be on high-risk systems and decisions, based on the potential to cause harm. But there is also the issue of how to define what constitutes a high-risk system.

“AI technology keeps developing as well. For example, traditionally in AI, data scientists handcrafted models; now the industry is moving to large-scale automated learning systems, where models are created and trained on the fly and continuously adapted. This leads to an explosion of AI assets. As a result, one-off evaluations and assessments of AI assets are no longer sufficient; more or less continuous monitoring of these assets in terms of fairness and other ethical measures is required.”
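To make the shift from one-off assessments to continuous monitoring concrete, here is a minimal Python sketch of the kind of rolling check van der Putten describes: it tracks approval rates per group over a sliding window of live decisions and flags when the demographic-parity ratio falls below a threshold. The FairnessMonitor class, the window size and the "80% rule" threshold are illustrative assumptions, not any vendor's actual tooling.

# A minimal sketch, assuming a stream of (group, outcome) decisions; all
# names and thresholds are illustrative, not any vendor's API.
from collections import deque

class FairnessMonitor:
    """Track approval rates per group over a sliding window and flag
    when the demographic-parity ratio drops below a threshold."""

    def __init__(self, window_size=1000, min_ratio=0.8):
        self.window = deque(maxlen=window_size)  # recent (group, approved) pairs
        self.min_ratio = min_ratio               # e.g. the informal "80% rule"

    def record(self, group, approved):
        self.window.append((group, bool(approved)))

    def parity_ratio(self):
        # Approval rate per group over the current window.
        totals, approvals = {}, {}
        for group, approved in self.window:
            totals[group] = totals.get(group, 0) + 1
            approvals[group] = approvals.get(group, 0) + int(approved)
        rates = {g: approvals[g] / totals[g] for g in totals}
        if len(rates) < 2 or max(rates.values()) == 0:
            return None  # not enough signal to compare groups yet
        return min(rates.values()) / max(rates.values())

    def alert(self):
        ratio = self.parity_ratio()
        return ratio is not None and ratio < self.min_ratio

# Usage: feed each live decision in as it is made, then poll for drift.
monitor = FairnessMonitor(window_size=500)
monitor.record("group_a", approved=True)
monitor.record("group_b", approved=False)
if monitor.alert():
    print("Fairness alert: parity ratio below threshold in current window")

Because every production decision is fed into the monitor as it happens, drift in a continuously adapting model surfaces within one window rather than at the next scheduled audit.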

In the rush to adopt AI across businesses, is enough attention being paid to safety, security and ethics? Can these ever be effectively policed?

“The latest proposed regulations, such as the recently proposed EU regulation, take an outcome-focused, risk-based approach. This means that attention is focused on what the AI system is used for and the potential risk of inflicting harm. Also, the more advanced providers and users of AI are actively developing and applying concrete methods to measure certain aspects, such as whether bias in automated decisions is within bounds. So rather than using a blanket approach, it is smart to focus attention on where damage can be done and to be very practical in terms of how ethical principles can be operationalised.”

As vendors race to secure their part of the commercial AI market, how can these often proprietary systems be governed or given oversight by any governing body?

“A popular frame is that commercial AI vendors use proprietary AI technology, hence it is evil. But this view is not correct. First, commercial AI vendors’ methods are typically widely used, well known, and no different from methods used in academic research or open-source systems. But more fundamentally, AI technology is fairly generic, so the same algorithms can be used for both good and harmful purposes, and in good or bad ways. It doesn’t matter whether the underlying technology is commercial or not.

“Therefore, it is not the general AI technology that you should police, but specific applications, decisions or models. It starts with identifying the risk based on what the AI is used for, and then operationalising evaluation metrics such as fairness and methods such as bias detection. This can be assessed by simulating decisions made by these systems, without having to know the fine details of the logic and models that are driving those decisions.”
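This kind of simulation can be sketched in a few lines of Python: send the black-box system pairs of profiles that are identical except for a protected attribute and compare approval rates, with no access to the model's internals. The decision_system stub, the profile fields and the score threshold below are hypothetical stand-ins for a real endpoint, not any system van der Putten names.

# A minimal sketch of a black-box bias audit; decision_system is a
# hypothetical stand-in for the opaque model endpoint being assessed.
import random

def decision_system(profile):
    # Replace this stub with a call to the real system under test.
    score = 0.5 * profile["income"] / 50_000 + 0.5 * random.random()
    return score > 0.6

def paired_audit(n=10_000, attribute="group", values=("a", "b")):
    """Send profiles identical except for one protected attribute and
    compare approval rates, without inspecting the model's internals."""
    approvals = {v: 0 for v in values}
    for _ in range(n):
        base = {"income": random.uniform(20_000, 120_000)}
        for v in values:
            if decision_system(dict(base, **{attribute: v})):
                approvals[v] += 1
    return {v: approvals[v] / n for v in values}

# A large gap between the groups' rates signals potential bias to investigate.
print("Approval rate per group:", paired_audit())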

How do you think the proposed EU Artificial Intelligence Act will influence AI development, and how will oversight and governance evolve?

“The starting point of the proposed EU regulation is good. It directs focus to high-risk systems, and it doesn’t differentiate between classical AI and statistics or more modern machine learning; instead, it focuses on outcomes. Many of the finer details have been left open, but that may be intentional, leaving it to the various stakeholders to agree on the operationalisation of the regulation and to maximise both the benefits of AI and the protection of trust. While it will take time for the dust to settle on all of this, the EU has clearly made the first move, which sets the bar for other global players.”


Photo by Tara Winstead from Pexels