M2M: The Future of Cybersecurity

The focus across much of the cybersecurity landscape has been on mitigating human attacks on networks. However, what about machines attacking machines? As automation expands, many of the systems we take for granted will have an M2M component. How do we secure these channels?

As the IoT and IIoT ecosystems expand, will security concerns also increase? When machines have more autonomy and communicate between themselves, how does the M2M space deliver its services in a secure environment? And how advanced is M2M security today?

There is no doubt that the IoT space is expanding. Indeed, according to the current IoT Barometer from Vodafone, over a third (34%) of companies are now using IoT, up from 29% in the previous IoT Barometer. Regionally, the Americas saw the most significant increase — rising from 27% to 40%. The industries that saw the greatest growth were transport and logistics (27% to 42%) and manufacturing and industrials (30% to 39%).

M2M communications have become commonplace. As part of the sensor and control landscape, M2M connections have been expanding. Has this expansion heightened security concerns?

According to Forescout’s State of IoT Security 2020 report: “The riskiest device groups from our Device Cloud data include smart buildings, medical devices, networking equipment and VoIP phones. IoT devices, which can be hard to monitor and control, exist in every vertical and can present risk to modern organizations, both as entry points into vulnerable networks and as final targets of specialized malware. The device types posing the highest level of risk are those within physical access control systems.”

Also, Alexandra Rehak, Internet of Things practice head at Omdia, says: “Enterprises and providers must work together to prioritize and support IoT security requirements. Providers need to make sure IoT security solutions are simple and can be easily understood and integrated. Given how high a priority this is for enterprise end users, providers also need to do more to educate customers as well as providing technology solutions, to help ensure IoT security is not a barrier for adoption.”

The marriage of IoT and M2M must be a coupling that is safe and secure. As more communications rely upon these systems, they must have robust security in place. As our built environment becomes more autonomous, ensuring the services we all use are safe is paramount.

Machine communications

As many core network services become more automated, the development of their security components must also account for the fact that humans created those protocols, as Jake Moore, Cybersecurity Specialist at ESET, explained to Silicon UK: “Before true M2M security comes into play, we really need to look at the data that is input into the machine. Humans are naturally prone to having a bias, and the data in M2M technologies still largely comes from humans. Authentic artificial intelligence behind the machine’s core thinking process still has a way to go before it is purely independent. So until then, we are still working with large amounts of data which have had human input.”

Moore continued: “These processes can allow human interaction to override them, which is a necessary part of the M2M model. Although it takes a large proportion of the work away from the employee, it must be continually monitored, updated and fed more data. Furthermore, some M2M technologies haven’t been designed with security in mind from the outset. Many such technologies have been designed to run across a secure network without the thought of going over the internet, which can cause extra risks.”

Allowing machines to manage systems with little or no human oversight carries potential dangers. Ben Goodman, SVP of Global Business and Corporate Development at ForgeRock, explained: “When two machines interact, the transactional risk is similar to a transaction between two humans, and the main risk is that one of the machines is a malicious actor.

“Your primary security objectives should, therefore, be to properly identify the actor on each side of the transaction and to check that they have the access rights to talk to each other – just as you would if it were a transaction involving a human. The added danger in an M2M context is that when there’s not a human involved, there’s no-one to notice if something’s going wrong. This heightens the need to ensure that every device in an M2M environment has definitive identities that are unique and verifiable.”
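As a rough illustration of the principle Goodman describes (each machine proving a unique, verifiable identity before any data is exchanged), the Python sketch below opens a mutual-TLS channel between two machines. It is a minimal example, not a description of ForgeRock's platform, and the certificate and key file names are placeholder assumptions.

import socket
import ssl

def open_mutual_tls_channel(peer_host: str, peer_port: int) -> ssl.SSLSocket:
    """Connect to a peer machine, requiring a verifiable certificate on both sides."""
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    context.load_verify_locations("ca.pem")                       # trust anchor for peer identities (placeholder)
    context.load_cert_chain("device_cert.pem", "device_key.pem")  # this machine's own identity (placeholders)
    context.verify_mode = ssl.CERT_REQUIRED                       # reject any peer without a valid certificate

    raw = socket.create_connection((peer_host, peer_port))
    tls = context.wrap_socket(raw, server_hostname=peer_host)
    # Both identities are now verified; the authorization check Goodman mentions
    # (is this peer allowed to talk to this service?) would follow here.
    return tls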

And for SRI International’s Principal Computer Scientist, Dr Karim Eldefrawy, when it comes to developing comprehensive security for automated systems, there’s no time to waste.

Dr. Karim Eldefrawy, a Principal Computer Scientist at SRI International.

“Automation will keep increasing, and there’s no point in fighting it; we should try to get it right. There has always been (so far) ‘snake oil’ in cybersecurity, and it is very hard sometimes to tell what works based on data and statistics.

“There is no silver bullet when it comes to cybersecurity,” continued Eldefrawy. “But neglecting cybersecurity (and potentially, in the future, what will be called ‘personal biosecurity’, which may be monitored by sensors and use online databases and platforms, e.g., contact tracing) is not an option anymore. We need a new paradigm to think about cybersecurity, and we need to accept that smart cities and smart infrastructure are being built to stay and will be the backbone that organizes societies for a while – and this can get very messy, very quickly.”

The human element

Machines will increasingly communicate with each other. The human component must, however, not be forgotten, as Phil Skipper, Head of Strategy, Vodafone Business IoT, explained to Silicon UK: “The human element comes into play when a user engages with a machine (e.g. an automated petrol pump), when one acts to protect and monitor the network, and finally when a person interacts with the network to either configure devices or consume the data generated.

Phil Skipper, Head of Strategy, Vodafone Business IoT.

“This is a very broad environment and is secured in several ways. Firstly, through the end-point devices. This typically uses two-factor authentication, biometrics or other tools to ensure that the user has permission to access the device and wider network services.

“The second way is securing the behaviour. It’s done by having a number of robust pathways that can be navigated – each pathway generates a trail, and this trail can be used to see if the interaction is deviating from the expected normal. This can then be used to increase security levels to validate the interaction further or terminate it.

“Lastly, the environment can be secured at the point when the transaction comes back into the human domain. This provides the opportunity for devices to reprogram and the data to be distributed. In these business environments, there are many tools that are available to help secure access and define role-based activities to secure human interaction.”
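To make the ‘trail’ idea Skipper describes concrete, here is a minimal Python sketch: each interaction is compared against a small set of known pathways, and anything that deviates can trigger further validation or termination. The pathway definitions and step names are hypothetical assumptions, not Vodafone's implementation.

# Hypothetical pathways an M2M interaction is allowed to follow.
ALLOWED_PATHWAYS = {
    ("authenticate", "read_meter", "report_usage", "disconnect"),
    ("authenticate", "update_firmware", "verify_signature", "reboot"),
}

def check_trail(observed_steps):
    """Classify an interaction trail against the known pathways."""
    observed = tuple(observed_steps)
    for pathway in ALLOWED_PATHWAYS:
        if observed == pathway:
            return "complete"      # the interaction followed a full known pathway
        if pathway[:len(observed)] == observed:
            return "in_progress"   # still on a known pathway so far
    return "deviation"             # escalate validation or terminate the interaction

print(check_trail(["authenticate", "read_meter"]))        # in_progress
print(check_trail(["authenticate", "dump_credentials"]))  # deviation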

M2M and IoT are close bedfellows, yet each requires its own approach to overall security. The autonomy that M2M systems can deliver is highly attractive to businesses; witness the rise of automation across many industries and sectors. As that autonomy becomes a core component of the M2M value proposition, deployments will only increase, and they must be achieved with security front and centre.

Silicon in Focus

Camilla Winlo, Director of Consultancy Services at DQM GRC.

When machines talk to machines with no human intervention, what are the security implications?

In 2017, Facebook set two chatbots talking to each other, but had to abandon the experiment when they started to evolve their use of the English language in a way that made it easier for them to work together – but unintelligible to the people monitoring them. The chatbots were tasked with negotiating with each other and seemed to do so successfully, but the negotiations were meaningless to humans. It went like this:

Bob: i can i i everything else . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i everything else . . . . . . . . . . . . . . .
Alice: balls have a ball to me to me to me to me to me to me to me

And so on.

If M2M automation involves two machines learning to communicate with each other, similar effects could occur – communications could become increasingly meaningful to the machines and increasingly meaningless to the humans overseeing them.

The less a human can understand what the machines are doing and why, the more difficult it may become to spot security incidents. For example, it would be much harder to detect that an attacker had corrupted the sensor input data if the information resulting from that sensor is meaningless to the human observer. This can have implications for all aspects of information security – confidentiality, integrity and availability.
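One partial mitigation is to make integrity machine-checkable even when the readings themselves are opaque to humans. The Python sketch below assumes each sensor shares a provisioned secret key with its consumer and signs every reading with an HMAC, so tampering in transit is detected automatically; it does not cover a compromised sensor itself, where anomaly detection on the values would still be needed.

import hashlib
import hmac

SENSOR_KEY = b"replace-with-provisioned-secret"   # assumed to be provisioned per device

def sign_reading(payload: bytes) -> bytes:
    """Sensor side: attach an authentication tag to the raw reading."""
    return hmac.new(SENSOR_KEY, payload, hashlib.sha256).digest()

def verify_reading(payload: bytes, tag: bytes) -> bool:
    """Consumer side: reject readings whose tag does not match."""
    expected = hmac.new(SENSOR_KEY, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)      # constant-time comparison

reading = b'{"sensor_id": 17, "value": 0.8431}'
tag = sign_reading(reading)
assert verify_reading(reading, tag)
assert not verify_reading(b'{"sensor_id": 17, "value": 9.9999}', tag)  # tampered payload is rejected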

There is also a big issue around accountability – where two machines are making automated decisions in negotiation with each other, who is responsible? The producer of one or both of the machines? The person who approves the algorithm governing the processing? The entity providing the input data? The entity acting upon the output? Assigning responsibilities in a meaningful way, and then assuring compliance with those responsibilities, can become very complicated.

Oversight is a massive challenge in this field, along with the intelligibility of machine learning algorithms and the assignment of liabilities and responsibilities. Taken together, this means there is a distinct lack of real competence available. It takes time and broad exposure to develop real expertise in any field, and this is, by definition, difficult to achieve in an emerging field. Can organizations find, recognize and effectively manage these scarce individuals? Can data controllers really exert control?

Incident response can be an issue too. Answering the most basic questions about ‘what has happened, why, how do we fix it and what do we do in the meantime?’ can be a real challenge where the incident relates to machine-to-machine automated communication.

This is why data protection laws like the GDPR insist that organizations using automated decision-making can explain how their algorithms work – and the ICO recommends doing this at two levels: one for general users, and a second technical explanation for experts. This means organizations should offer meaningful human oversight from a person who can understand and review the decision and change it if appropriate.

IoT and M2M are associated technologies. Do we have integrated security protocols to protect these networks when they come into contact with human users?

The best-known privacy engineering and security standards for IoT devices are probably those published by NIST and ENISA. NISTIR 8200 lists the security objectives for IoT devices, and NISTIR 8228 sets out the considerations for managing IoT cybersecurity and privacy risks. ENISA has set out risk assessments and baseline security recommendations for IoT devices.

However, IoT is a fragmented world, and there are comparatively few security protocols in general – not just for the human/machine interface. It’s not uncommon for information security teams to have to manage an ecosystem with a plethora of communications protocols and multiple – or missing – authentication protocols. It can also be hard to patch devices when vulnerabilities are identified.

From a data protection perspective, the critical protection requirement here is to ensure that humans can’t interfere with networks like these in a way that causes privacy risks. Organizations must carry out Data Protection Impact Assessments to establish what the risks are and what technical and organizational measures will be needed to control them.

Standards such as NISTIR 8200 and NISTIR 8228 cover technical measures that are designed to thwart malevolent humans up to a certain skill level and to prevent foreseeable human errors. Protection beyond this point requires organizational measures, such as training and monitoring. Organizational measures do not typically have published protocols in the way that technical measures do; however, there are some established best practices such as providing induction training, role-based training and refresher training at least annually.

The bottom line, though, is that humans do some very peculiar and unexpected things. We may never know what made cybersecurity professor Hector Marco decide it would be a good idea to use a common penetration testing technique on an in-flight entertainment system while the plane was airborne – but it is probably as much luck as judgement that he only froze his own screen and not the whole system.

Automation and AI also impact how network communication is built. Add M2M systems, and is there potential to lose control of access and security?

Probably the best-known IoT malware is Mirai [https://en.wikipedia.org/wiki/Mirai_(malware)], which turns networked devices that use Linux into bots. The first Mirai botnet was found as far back as August 2016, and it is still a significant threat today.

IoT devices have some particular vulnerabilities that make them more susceptible to attacks. They typically have large attack surfaces with many vulnerability points and limited device resources for security controls. They are often made cheaply and sold to unsophisticated users who may not understand the importance of changing default passwords – and may not even have the ability to do so. This is how Mirai works – it scans for IP addresses of IoT devices and then attempts to log in using a short table of usernames and passwords.
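As a defensive illustration of that weakness, the sketch below audits a device inventory for the kind of factory-default credentials Mirai’s login table exploits. The inventory format and the credential list are assumptions made for the example, not Mirai’s actual table.

# A short, illustrative list of factory-default credential pairs.
FACTORY_DEFAULTS = {("admin", "admin"), ("root", "root"), ("admin", "1234"), ("user", "user")}

# Hypothetical inventory of devices on your own network.
device_inventory = [
    {"ip": "10.0.0.12", "model": "ip-camera", "username": "admin", "password": "admin"},
    {"ip": "10.0.0.40", "model": "dvr", "username": "ops", "password": "X9!vR2#kQ"},
]

def devices_with_default_credentials(inventory):
    """Return the devices whose credentials appear in the known default list."""
    return [d for d in inventory if (d["username"], d["password"]) in FACTORY_DEFAULTS]

for device in devices_with_default_credentials(device_inventory):
    print(f"Change the credentials on {device['model']} at {device['ip']}")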

We have seen a casino attacked via its fish tank, so there is undoubtedly a risk that using vulnerable devices in complex systems can cause access and security issues.

From a data protection point of view, the challenge is that there is often a desire to use technology because it’s there, or because it will improve the user experience at the surface. However, under the bonnet, the user experience may be significantly worse. Automated decision-making that cannot be fully explained breaches the GDPR because of the risk of unfair outcomes – the reason exams had to be re-marked this summer is that an unfair algorithm was designed in a way that breached the GDPR.

Complex and vulnerable ecosystems risk breaching individuals’ right to confidentiality and integrity. In conjunction with often extensive use of sensors, this can have real-world consequences for people’s physical safety as well as other harms such as financial loss, reputational damage, discrimination and distress.

When M2M deployments are considered in high-risk sectors such as healthcare, what safeguards are in place to ensure these systems can’t be compromised?

The first defence should be the Data Protection Impact Assessment. This assesses all the harms of the specific project and requires organizations to mitigate any high risks before processing begins. If the organization cannot find a way to mitigate all the high risks, then they must seek prior consultation with the data protection authority before processing can begin. Simply put, processing with uncontrolled high risks is unlawful.

We are actively developing smart cars, smart buildings and smart homes. Are we also developing smart security to protect M2M networks?

This is a new field, and information security professionals’ knowledge is improving all the time, but there is still some distance to travel before IoT and M2M networks are genuinely secure. Smart cars, buildings and homes will largely be bought and maintained by non-experts who will rely on the manufacturers and developers getting smart about security. That’s why the Privacy by Design requirements in the GDPR are so important.

What does the future of M2M security look like?

Because the GDPR prevents deployments with outstanding high risks to individuals, M2M security has to improve, or there is simply no future for the industry. That means manufacturers and developers need to get better at understanding and mitigating risks – and end-users need to be prepared to pay a bit more for safer equipment. In the end, it’s a false economy to do anything else – the government, courts and regulators all have the power to stop unlawful processing. And we have seen the havoc that can cause, with the last-minute demand to re-mark exam papers.