Key insights for technologists navigating on-premises and hybrid infrastructures


In recent years, the spotlight in the IT landscape has been on cloud-native technologies. Industry discourse has revolved around the way businesses use no-code and low-code platforms to accelerate release velocity and deliver on digital transformation goals.

However, amidst this fervour, it’s essential to remember that a significant proportion of organisations continue to deploy a large part of their IT estate on-premises. Despite the enthusiasm around contemporary application frameworks, a considerable segment of IT teams still dedicates the majority of its work hours to building, overseeing, and refining these applications.

Therefore, to ensure these essential roles aren’t overlooked, this article presents three crucial considerations for tech professionals tasked with managing and optimising mission-critical applications in on-premises and hybrid settings.

  • On-premises computing is a mainstay

You might not think so from reading the headlines, but the reality is that numerous organisations will continue to deploy on-premises technologies for many years to come. Of course, the shift to cloud computing will continue to accelerate (and attract the limelight) but, as many organisations are now realising, cloud migration takes time. It also involves significant investment, something many businesses aren’t prepared to take on in the current economic climate. Already, we’re seeing many IT leaders re-evaluate their cloud strategies as cloud costs rise.

It’s also worth remembering that in some industries, wide-scale migration to cloud-native technologies simply isn’t an option. Take the public sector, where security and privacy are paramount because of the highly sensitive nature of the data these organisations manage. Federal governments must adhere to strict requirements to operate air-gapped environments with no access to the internet, and similar regulations apply to state and regional government agencies, as well as healthcare organisations. These requirements make it almost impossible to move to a public cloud environment.

But it’s not just the public sector contending with this situation. Financial services institutions must comply with tight data sovereignty rules which dictate that customer data remain within national borders. Organisations can’t afford the slightest slip-up; the penalty is hefty fines and severe reputational damage.

Some IT leaders may wish they could move more of their IT estate into cloud-native environments but are restricted from doing so. In other instances, though, it simply makes more sense for organisations to keep elements of their IT on-premises.

We work with a number of major global brands that choose not to place their data in the cloud because of the huge volumes of sensitive intellectual property (IP) they own. They aren’t prepared to take the risk, however small, of storing this IP outside the organisation. These IT leaders want to retain the control that on-premises computing provides: total visibility into where their data resides, and the ability to handle their own upgrades within their own four walls.

Evidently, while cloud-native technologies may be perceived as more exciting, some business-critical applications will need to remain on-premises for a long time to come.

  • IT teams need unified visibility across on-premises and cloud environments

With this in mind, technologists need to ensure they’re able to manage and optimise on-premises applications and supporting infrastructure in order to deliver seamless digital experiences at all times. And in a growing number of cases, they need to monitor applications within a hybrid environment, where application components are running across both legacy and public cloud environments.

IT teams need real-time visibility into availability and performance up and down the IT stack, from customer-facing applications to core infrastructure. This allows them to quickly pinpoint the cause and location of incidents and performance degradation, rather than reacting after the fact and spending large amounts of time trying to understand an issue.

Critically, technologists need to connect IT data with real-time business metrics so they can quickly identify the issues that most seriously impact end-user experience.
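As a purely illustrative sketch of what that connection can look like, the Python snippet below joins a technical metric (error rate) with a business metric (conversion rate) on a shared timestamp and flags windows where both degrade at once. The file and column names are hypothetical assumptions for the example; a commercial observability platform would perform this correlation natively.

```python
# Illustrative only: joining a technical metric (error rate) with a business
# metric (checkout conversion rate) on a shared timestamp, so that incidents
# can be ranked by business impact. File and column names are hypothetical.
import pandas as pd

errors = pd.read_csv("error_rate.csv", parse_dates=["timestamp"])
conversions = pd.read_csv("conversions.csv", parse_dates=["timestamp"])

joined = errors.merge(conversions, on="timestamp", how="inner")

# Flag windows where a technical regression coincides with a business drop:
# these are the incidents worth escalating first.
impacted = joined[
    (joined["error_rate"] > joined["error_rate"].quantile(0.95))
    & (joined["conversion_rate"] < joined["conversion_rate"].quantile(0.05))
]
print(impacted.head())
```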

Increasingly, as organisations move to hybrid environments, IT teams need unified visibility across their entire IT estate. However, many IT departments still deploy separate tools to monitor cloud and on-premises applications, so they can’t generate a clear line of sight across the entire application path in hybrid environments. They’re forced to run in a split-screen mode and can’t see the complete path up and down the application stack. This makes it extremely challenging to troubleshoot issues, and key metrics such as MTTR (mean time to resolution) and MTTX inevitably increase.

This is why organisations need to implement an observability platform that spans both cloud-native and on-premises environments, with telemetry data from cloud-native environments and agent-based entities within legacy applications ingested into the same platform. This unified visibility and insight are vital for IT teams to cut through data noise and complexity and to make informed, real-time decisions based on business impact.
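As one concrete illustration of the pattern, here is a minimal sketch using the open-source OpenTelemetry SDK for Python to send traces from any service, whether it runs on-premises or in the cloud, to a single collection endpoint. The service name, environment tag, and collector address are assumptions for the example; a platform such as AppDynamics would typically ingest this data through its own agents or an OTLP-compatible endpoint.

```python
# Minimal OpenTelemetry tracing setup: one export pipeline regardless of
# where the service runs (on-premises VM or cloud). Assumes an
# OTLP-compatible collector is reachable at the hypothetical address below.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

resource = Resource.create({
    "service.name": "payments-api",           # hypothetical service name
    "deployment.environment": "on-premises",  # tag the hosting environment
})

provider = TracerProvider(resource=resource)
provider.add_span_processor(
    BatchSpanProcessor(
        OTLPSpanExporter(endpoint="collector.internal:4317", insecure=True)
    )
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("process-payment") as span:
    span.set_attribute("payment.amount", 42.50)  # business context on the trace
```

Tagging each span with its hosting environment and business attributes is what lets a single backend show the complete application path across on-premises and cloud components, rather than two disconnected views.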

  • IT teams need to manage scale and speed in an on-premises environment

One of the big advantages of cloud computing is that it enables organisations to scale their use of IT automatically and dynamically, with minimal or zero human input. But within an on-premises environment, it’s down to IT teams to manage scale and speed themselves.

This becomes particularly challenging when there are major fluctuations in demand. Many industries see demand spike at predictable points in the calendar: retail has Black Friday and Cyber Monday, tax and revenue services have deadlines for returns and payments, and financial services firms see huge increases in payment transactions around major holidays.

IT teams need to be prepared to handle these changes in demand, particularly when they’re deploying on-premises applications and infrastructure. They can’t afford disruption or downtime in their business-critical applications at the most important moments of the year.

To manage these surges in demand, technologists need tools that provide dynamic baselining capabilities to trigger additional capacity within their environment. This alleviates the huge pressure on IT teams managing on-premises applications during the busiest times of the year, and enables them to focus their attention on strategic, customer-facing priorities.
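To make the idea of dynamic baselining concrete, here is a minimal sketch assuming a simple rolling mean-and-standard-deviation baseline over recent request rates: traffic rising well above the baseline band triggers a placeholder scale-out hook. The window size, threshold, and provision_capacity function are all hypothetical; production tooling uses considerably more sophisticated, seasonality-aware baselines.

```python
# Dynamic-baselining sketch: a rolling mean/std over recent demand defines
# "normal", and a sample far above that band triggers a (hypothetical)
# scale-out hook. Window size and threshold are illustrative assumptions.
from collections import deque
import statistics

WINDOW = 60          # number of recent samples in the baseline
SIGMA_THRESHOLD = 3  # how far above the baseline counts as a surge

history = deque(maxlen=WINDOW)

def provision_capacity() -> None:
    # Placeholder for the real action: add VMs, expand a pool, page an operator.
    print("Surge detected: requesting additional capacity")

def observe(requests_per_second: float) -> None:
    if len(history) >= WINDOW:
        mean = statistics.fmean(history)
        stdev = statistics.pstdev(history) or 1.0  # avoid zero std on flat traffic
        if requests_per_second > mean + SIGMA_THRESHOLD * stdev:
            provision_capacity()
    history.append(requests_per_second)
```

The point of the rolling window is that the baseline adapts: what counts as a surge on Black Friday morning differs from a quiet Sunday night, which is exactly why static thresholds fall short.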

While the move to modern application stacks will undoubtedly gather pace, many organisations will continue to run some (or most) of their IT estate on-premises for some time to come. And, as we have covered, the shift might never happen in some industries.

IT leaders therefore can’t ignore the present and focus all their attention on the future. They need to provide their technologists with the tools and insights required to optimise availability and performance within on-premises and hybrid environments, and the capabilities to predict and respond to spikes in demand.

With a hybrid observability strategy, IT teams can correlate telemetry data from modern environments with data from applications already instrumented through traditional agent-based monitoring. This unified visibility across on-premises and cloud environments will enable technologists to deliver seamless digital experiences at all times, both now and in the future.

Gregg Ostrowski, CTO Advisor, Cisco AppDynamics