Trust, Privacy, Ethics and AI

Paul Knight, Partner, Mills & Reeve

Paul Knight is a Partner at the national law firm Mills & Reeve, specialising in technology and commercial law, including data protection, privacy and their intersection with new technology. Amongst other things, he advises on commercial contracts, software licensing, e-commerce and GDPR compliance. Knight works with clients from different sectors to negotiate contract terms, to help quantify risks and to provide realistic solutions that protect their interests.

What can businesses learn from the history of AI and how it has been used within enterprises?

Trust in technology is essential to its success. Big data and AI-based decision making can offer powerful benefits to individuals, and we are already enjoying faster and more accurate processing of insurance claims, credit card or loan applications and questions for our virtual assistants.

These benefits can only be accessed, though, with proper clarity from businesses about the contexts in which AI technologies are being deployed. In particular, AI technology requires the collection and processing of vast amounts of user information: our shopping habits, driving behaviour, facial characteristics, medical history and more, often in ways that we cannot fully understand.

The algorithms are so complex that the processing is not always fully understood even by the experts themselves. And among staff there is sometimes an underlying fear that AI will be used to replace them in the long run. So, businesses should take into account that the success or failure of AI technologies is dictated predominantly by people and culture, rather than by the technology itself.

Within the legal services sector, one of AI’s big precedents has been document comparison software. Once upon a time, comparing documents usually required a trainee solicitor to sit down with the two documents, do a line-by-line comparison and highlight any differences between the two versions. The software can do this in a matter of seconds, whilst avoiding human error and freeing up the valuable capacity of junior staff.
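As a minimal illustration of what such software automates, the sketch below uses Python’s standard difflib module to produce a line-by-line comparison of two clause versions; the clause text is an invented placeholder.

```python
import difflib

# Two hypothetical versions of a contract clause (invented placeholders).
version_1 = """The Supplier shall deliver the Goods within 30 days.
Payment is due within 60 days of invoice.
This Agreement is governed by English law.""".splitlines()

version_2 = """The Supplier shall deliver the Goods within 14 days.
Payment is due within 30 days of invoice.
This Agreement is governed by English law.""".splitlines()

# unified_diff reports only the lines that differ between the versions,
# the check that once required a manual line-by-line read.
for line in difflib.unified_diff(version_1, version_2,
                                 fromfile="draft_v1", tofile="draft_v2",
                                 lineterm=""):
    print(line)
```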

AI solutions are just the next step up. More than simply comparing documents, one of AI’s best applications is the review of standard leases: comparing the content against metrics written by the lawyers, extracting the relevant data and writing the output into a fresh spreadsheet, ready for the next stage of the legal due diligence process. The benefits of both technologies are obvious and should put to rest some of the alarmism we see about new innovation. In this sense, often the best way of looking forwards is to look backwards, too.
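For concreteness, here is a sketch of the final extract-and-tabulate step just described, assuming a hypothetical AI review stage has already pulled key terms from each lease against the lawyers’ metrics; every name and figure is invented for illustration.

```python
import csv

# Hypothetical output of an AI lease-review stage: key terms extracted
# against metrics defined by the lawyers (all names and figures invented).
extracted_terms = [
    {"lease": "Unit 1, Example Park", "term_years": 10,
     "annual_rent_gbp": 25000, "break_clause": "Year 5"},
    {"lease": "Unit 2, Example Park", "term_years": 5,
     "annual_rent_gbp": 18000, "break_clause": "None"},
]

# Write the extracted data into a fresh spreadsheet (CSV), ready for the
# next stage of the due diligence process.
with open("lease_summary.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(extracted_terms[0]))
    writer.writeheader()
    writer.writerows(extracted_terms)
```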

When professional services firms use AI to provide advice to their clients, the default position is that the professional services firm – not the AI provider – will be liable if there are any inaccuracies or omissions. Consequently, professional services firms have to be very confident that the algorithms on which the AI is based are reliable – they have to be able to trust the technology, and this means plenty of testing before it’s put into live use!

Can you outline some great current uses of AI? Who is using it to the best effect and why?

One of the organisations we work with, The Alan Turing Institute, has even been using AI to help responses to natural disasters. Amidst the devastation of Hurricane Dorian in the Bahamas, for example, AI was used to quickly label before-and-after satellite images to identify peak damage sites and which transport systems could – or couldn’t – be used by response teams. The use of AI can, therefore, make disaster response more efficient and effective.

Which key areas of businesses are being impacted by AI?

The areas of business currently most impacted by AI tend to involve high volumes of data. AI is not yet developed enough for active imagination, so its immediate application lies mainly in intensive pattern recognition. Email spam filters, for example, generally make use of AI technology, with large data sets being used to build statistical models that shape the content we receive. These data solutions are particularly helpful in the legal sector, where finding trends and exceptions is key for due diligence on leases and standard-format contracts.
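To make the pattern-recognition point concrete, here is a minimal sketch of the statistical approach behind many spam filters: a Naive Bayes classifier trained on word counts, using scikit-learn and a toy labelled data set invented for illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy labelled data set (invented): 1 = spam, 0 = legitimate.
emails = [
    "win a free prize now", "claim your free money",
    "meeting moved to 3pm", "please review the attached lease",
    "free offer limited time", "draft contract for your comments",
]
labels = [1, 1, 0, 0, 1, 0]

# Build a simple statistical model: word counts feed a Naive Bayes
# classifier that learns which patterns signal spam.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(emails)
model = MultinomialNB().fit(X, labels)

# Classify a new message based on the learned patterns.
test = vectorizer.transform(["claim your free prize"])
print(model.predict(test))  # e.g. [1] -> flagged as spam
```

In production such filters are trained on vastly larger labelled data sets, but the principle is the same: the model learns statistical patterns from data rather than following hand-written rules.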

What are the risks and pitfalls businesses need to avoid as they use AI across their enterprises?

We’ve all heard about Europe’s sweeping privacy law reform, the GDPR. Other countries are following suit, with new and tighter privacy laws now looking imminent worldwide. Staying compliant with privacy regulation has, therefore, become a priority for boardrooms.

Among the key elements of the GDPR are transparency and setting limits on processing. But the task of explaining how data will be collected and used in AI-based applications is proving a real challenge for regulators, and placing appropriate limits on that processing is equally difficult. The duration and scope of processing should be defined from the outset, a requirement that does not sit naturally with giving machines the freedom to trawl oceans of data in search of value.

Businesses and industry leaders will not be helpless on this, however. The UK’s privacy watchdog, the ICO, for example, is working alongside The Alan Turing Institute on Project ExplAIn to find ways to improve public understanding of AI processing. It is worth remembering that regulation must always progress with public knowledge and acceptance high on the agenda. Compliance should not be a box-ticking exercise, but an active process of engagement with the public and with the GDPR’s principles-based approach.

Indeed, if companies aren’t clear with their users about data, fines aren’t the only penalty they should worry about: breaking the rules can deliver big hits to public goodwill and reputation, with tangible effects on their financials.

The EU’s High-Level Expert Group on AI has also produced ethics guidelines setting out seven key requirements to inform and shape future AI development. These are:

  • Human agency and oversight.
  • Technical robustness and safety.
  • Privacy and data governance.
  • Transparency.
  • Diversity, non-discrimination and fairness.
  • Societal and environmental well-being.
  • Accountability.

What does the future of AI look like?

The concept of trustworthy AI, as explained by the EU’s High-Level Expert Group on AI, should be our priority going forward. As well as helping us avoid AI’s big economic and social risks, trustworthy AI could make an active impact on some of the world’s biggest challenges.

In realising sustainable infrastructure and battling climate change, for example, AI can be used to optimise transport arrangements and the operation of energy-efficient engines. For health and wellbeing, AI can help to identify symptoms and detect diseases, accelerate the development of medicines and offer more targeted treatments.

As a final case in point, ethical AI may even prove invaluable in future education. In creating personalised and adaptable education programmes for each individual, AI can overcome the costs of blanket policy and help everyone to acquire new skills.

Put simply, ethical use should be at the front of our minds in any future pursuit of AI technology.