Machine Intelligence: The Value of Today’s AI

Can you outline how Rainbird was founded?

Rainbird Technologies came about as a consequence of a shared experience my co-founder and I had in a previous business. That business analysed contentious insurer-to-insurer motor claims; back in 2007, the market was suffering from a high level of disputed claims and even collusion.

What we did in that business was take a group of people who understood how the system was being defrauded and build a piece of software that could identify potentially fraudulent claims. It was a very successful business, saving insurers hundreds of millions of pounds.

The software was very challenging to build. The people in my team who coded the application had, in effect, to become insurance experts themselves. What we learned as we built the insurance application is that something is often lost in translation between the expert and the person coding the software. So, when we left that company, we thought we could develop a solution to remedy that problem. And we could see that the translation issue applied to a wide range of businesses.

Today’s AI and Machine Learning can trace their heritage to early expert systems. Have the analytical systems now available to businesses become possible simply because of the processing power we now have?

Expert systems are a good analogy for the systems we build. The early expert systems failed mostly because we didn’t have the processing power to properly capture the human expertise being expressed, and because you always needed a mathematical expert who also knew the language of the subject. What we wanted to do was connect human knowledge to very large datasets.

With Rainbird, we now have technology that connects the subject of the expert system with a way of coding that enables the person with the knowledge or expertise to train the system directly, without a coder sitting between them. So, nothing is lost in translation.
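To make the idea concrete, here is a minimal, hypothetical sketch of an expert expressing knowledge as declarative rules (data, not code) that a small engine interprets. The rule format, field names and conclusions below are invented purely for illustration and are not Rainbird’s actual modelling language.

```python
# Hypothetical sketch: the expert writes declarative rules as data,
# and a small engine interprets them. Invented for illustration only.

RULES = [
    # each rule: (fact required, value expected, conclusion if it holds)
    {"if": ("vehicle_written_off", True), "then": "inspect_salvage"},
    {"if": ("injury_claimed", True), "then": "request_medical_report"},
]

def infer(facts: dict) -> list[str]:
    """Fire every rule whose condition matches the supplied facts."""
    conclusions = []
    for rule in RULES:
        key, expected = rule["if"]
        if facts.get(key) == expected:
            conclusions.append(rule["then"])
    return conclusions

print(infer({"vehicle_written_off": True, "injury_claimed": False}))
# -> ['inspect_salvage']
```

Because the rules are plain data rather than code, the person with the domain knowledge can, in principle, author and amend them without a developer translating on their behalf.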

Is ensuring that the outputs from an AI system are accurate, ethical and explainable vital to get right if businesses are to trust the AI systems they are using?

With every decision Rainbird delivers there is what we call an Evidence Tree. This is a rationale that explains the reasoning that went on behind the scenes to arrive at the decision or output the user is seeing. The system also tells you where the data that supports the decision came from, and all the logic that was involved. Rainbird is subject-matter agnostic, but we do a lot of work within highly regulated sectors such as insurance.
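As an illustration only, and not Rainbird’s actual data model, an evidence tree might be represented as nodes that each record a conclusion, the data source supporting it, and the sub-conclusions it rests on:

```python
# Hypothetical evidence tree: each node records a conclusion, the data
# source that supports it, and the sub-conclusions it rests on.
from dataclasses import dataclass, field

@dataclass
class EvidenceNode:
    conclusion: str          # the statement this node asserts
    source: str              # where the supporting data came from
    children: list["EvidenceNode"] = field(default_factory=list)

    def explain(self, depth: int = 0) -> str:
        """Render the reasoning as an indented, human-readable rationale."""
        lines = [f"{'  ' * depth}- {self.conclusion} (source: {self.source})"]
        for child in self.children:
            lines.append(child.explain(depth + 1))
        return "\n".join(lines)

# Example: an insurance decision with its supporting evidence.
decision = EvidenceNode(
    "Claim flagged for review", "rule: high-risk pattern",
    children=[
        EvidenceNode("Claimant involved in 3 claims in 12 months", "claims database"),
        EvidenceNode("Repair cost exceeds vehicle value", "valuation feed"),
    ],
)
print(decision.explain())
```

Walking the tree from the root decision down to the leaf facts yields exactly the kind of rationale described above: every conclusion is traceable to the data and logic that produced it.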

Indeed, explainability is important because even if you don’t have a regulator, you are still accountable, so you need explainability if these systems are to reach wide adoption and deliver the benefits they can. Essentially, end-users must be able to trust these systems; trust and explainability go together.

Are the huge datasets that many machine learning systems use actually a disadvantage? Would smaller datasets offer more accurate outputs when machine learning algorithms are applied?

The datasets used by these systems can be highly focused and explicit, and it’s true that you can gain explainable outputs from these smaller datasets. However, AI is a very fragmented space: it can mean different things to different people, and a lot of promises have been made about this technology, often with high expectations.

There are issues when you start to look at Machine Learning, which is, in effect, the statistical analysis of very large datasets. Problems can occur with bias and hidden variables, both of which can affect the outputs. What many of these deployments miss is how important it still is to involve a human, with their expertise and their decision-making.

What you need to do is capture and encode that human expertise alongside the datasets you have. The result is the ability to deliver targeted, focused outcomes that you can explain and that are free from bias.
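A hedged sketch of what that combination might look like: a data-driven score moderated by an explicit, explainable expert rule. The function names, thresholds and claim fields below are assumptions made purely for illustration, not any vendor’s actual method.

```python
# Sketch: combine a statistical score with encoded expert knowledge.
# All names, fields and thresholds are invented for illustration.

def model_score(claim: dict) -> float:
    """Stand-in for a statistical model trained on historical claims."""
    return 0.3 + 0.2 * claim["claims_last_year"]   # toy scoring function

def decide(claim: dict) -> tuple[str, str]:
    score = model_score(claim)
    # Expert knowledge encoded explicitly: staged-accident patterns warrant
    # review regardless of the statistical score, and the reason is recorded.
    if claim["low_speed_collision"] and claim["injury_claimed"]:
        return "review", "expert rule: low-speed collision with injury claim"
    if score > 0.7:
        return "review", f"model score {score:.2f} above 0.70 threshold"
    return "approve", f"model score {score:.2f} within tolerance"

claim = {"claims_last_year": 1, "low_speed_collision": True, "injury_claimed": True}
print(decide(claim))
# -> ('review', 'expert rule: low-speed collision with injury claim')
```

The point of the design is that every outcome carries a stated reason, whether it came from the statistical score or from an encoded piece of human expertise.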

Is there still a misunderstanding of what AI is and how it can be applied to business processes? And is there also a false idea that AI can replace human workforces?

Augmentation is the key here. This is how technologies like AI should be built and then used. We have seen anxiety like this every time a new technology is introduced – computerisation is a good recent example. We need to ensure when new technologies are used, they are deployed ethically and transparently. The best way to do this is to keep valuing the human capital in a business. AIs should be there to help people do their jobs more efficiently.

How AIs are being applied, and the security and privacy consequences of this technology, are high in the public’s consciousness, thanks in part to GDPR. Take facial recognition: the accuracy statistics, which are based on Machine Learning, are often difficult to describe well. When you look at the dataset and the components that arrived at a decision – identifying someone, in this example – you might find that the system can identify a woman, but that women are under-represented in the overall dataset.

The practical consequence is that the results you are seeing are not necessarily that accurate. So, when vendors make claims about the accuracy of their systems, you need to look very closely at what they are basing those claims upon. The Face Recognition Vendor Test is very interesting here. One major vendor misidentified people of colour. This isn’t surprising, as the overwhelming majority of images in the dataset being used would be of white Americans. So, in this context, results from a system using this dataset should be approached with great care.
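To see how an aggregate figure can mislead, here is a small worked example with invented numbers: a headline accuracy of 96.4% hides much weaker performance on an under-represented group.

```python
# Invented numbers for illustration: overall accuracy can look strong
# while hiding weaker performance on an under-represented group.
groups = {
    # group: (number of test images, number correctly identified)
    "group_a": (9000, 8820),   # well represented: 98.0% accurate
    "group_b": (1000, 820),    # under-represented: 82.0% accurate
}

total = sum(n for n, _ in groups.values())
correct = sum(c for _, c in groups.values())
print(f"Overall accuracy: {correct / total:.1%}")          # 96.4%
for name, (n, c) in groups.items():
    print(f"{name}: {c / n:.1%} on {n / total:.0%} of the data")
```

A vendor quoting only the 96.4% figure would be telling the truth, yet saying little about how the system performs on the group that makes up just a tenth of the test data.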

Clearly AI technologies will continue to rapidly evolve. How do you see their application expanding over the next few years?

I think when AI systems are used to recommend new music or books to people, that is one thing, but using the technology for face recognition, for instance, or to base insurance decisions upon, is less certain territory. The issue for the foreseeable future will be building AI systems that have ethical and explainable outputs at their core – outputs that can stand up in court, for instance.

If, in the future, a company says it has a clean and unbiased dataset that it then uses with Machine Learning, it will have to prove that claim. And proving that claim is actually much harder than you might think. Bias can easily remain hidden in a variable; it is very, very hard to prove your data is unbiased. So, without a rationale to support your claims, how are you going to get to the point where decisions can stand up in court if, for instance, I was making an insurance claim?
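A toy illustration, using synthetic data, of why bias can remain hidden in a variable: even if the protected attribute is deleted from the dataset, another column (here a hypothetical postcode risk band) can act as a proxy for it.

```python
# Toy illustration (synthetic data): removing a protected attribute does
# not remove bias if another variable acts as a proxy for it.
import random

random.seed(0)
rows = []
for _ in range(10_000):
    protected = random.random() < 0.5
    # "postcode risk band" is a proxy: it tracks the protected attribute
    # except for 10% random noise.
    postcode_risk_band = 1 if (protected ^ (random.random() < 0.1)) else 0
    rows.append((protected, postcode_risk_band))

# Even with the protected column deleted, the proxy still reveals it.
agree = sum(p == bool(b) for p, b in rows) / len(rows)
print(f"Proxy matches protected attribute in {agree:.0%} of rows")  # ~90%
```

Any model trained on the remaining columns can reconstruct the protected attribute from the proxy, which is why a bare claim of "unbiased data" is so hard to substantiate without a rationale behind each decision.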

David Howell

Dave Howell is a freelance journalist and writer. His work has appeared across the national press and in industry-leading magazines and websites. He specialises in technology and business. Read more about Dave on his website: Nexus Publishing. https://www.nexuspublishing.co.uk.
