
How public cloud is changing our approach to the datacentre
There’s a whisper that the datacentre is becoming in vogue again. And I think the whisper makes sense. Software Defined Datacentre (SDDC) technologies that were purpose-built for the Public cloud are changing perceptions of how we approach on-premises computing.
We’ve spent so long driving workloads onto the Public cloud that mentioning datacentres and on-premises computing has become hush-hush. It’s only discussed in whispered conversations in darkened rooms.
As someone who has designed and built modern, multi-tenant private cloud platforms and worked closely with partners such as Microsoft and Equinix, I want to share what I’ve learned about how Public cloud is modernising on-premises datacentre design.
There are many compelling reasons for keeping an on-premises footprint. I recently discussed these in an Avanade blog looking at why Hybrid Cloud is where most enterprises are these days. The main reasons: performance and partner connectivity; data residency and control; incompatible services; and regulatory compliance, where dependency on a single Public cloud provider isn’t acceptable.
If you’re adopting Hybrid, you really want the private side looking as cloud-like and cloud-compatible as the public side, both to maximise your opportunities for workload portability and to get to a competitive price point.
When I think of Public cloud providers (Microsoft with Azure, in my case) I compare them to Formula 1 race teams. They develop the advanced technologies necessary to gain a competitive advantage (perhaps not where Mercedes is concerned right now), which over time trickle down to us mere mortals.
That analogy is perfect for SDDC technologies, and in my case Windows Server and Azure Stack HCI. Based on many of the building blocks of Azure, they offer a commodity compute, storage and network virtualisation layer for the SDDC. They also deliver a sizeable chunk of the Infrastructure as a Service (IaaS) stack found in their big brother, Azure.
Whether you run containers on Kubernetes (and everything is better on Kubernetes, right?) or still install off-the-shelf software on Windows and Linux servers, the technology behind Windows Server HCI delivers SDDC at a seriously compelling price point.
SDDC technologies, such as Windows Server HCI, radically change how we approach datacentre design. That’s because they move the entire tenant/workload configuration into software, which greatly simplifies the underlying physical architecture.
Once you standardise on a core network platform and establish perimeter defence and East/West segregation of management and tenants, the physical layer rarely needs touching. With tenant on-boarding and configuration managed in software, more complex changes to applications and tenants are no longer constrained by the underlying datacentre architecture.
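To make “tenant configuration in software” concrete, here’s a minimal sketch of the idea: a declarative tenant network definition handed to an SDN control plane to reconcile against the fabric. The data model and the apply_tenant function are hypothetical, purely for illustration; real platforms (such as Microsoft’s SDN stack) expose their own APIs.

```python
from dataclasses import dataclass, field

# Hypothetical declarative model: the tenant's entire network shape
# lives in software, not in switch configs.
@dataclass
class Subnet:
    name: str
    cidr: str

@dataclass
class TenantNetwork:
    tenant: str
    vni: int                      # overlay network identifier (e.g. a VXLAN VNI)
    subnets: list[Subnet] = field(default_factory=list)
    allow_inbound: list[str] = field(default_factory=list)  # simple ACL sketch

def apply_tenant(net: TenantNetwork) -> None:
    """Hypothetical 'reconcile' step: hand the desired state to the SDN
    control plane, which programs the overlay. No physical change needed."""
    print(f"Programming overlay VNI {net.vni} for tenant '{net.tenant}'")
    for s in net.subnets:
        print(f"  subnet {s.name}: {s.cidr}")
    for rule in net.allow_inbound:
        print(f"  allow inbound: {rule}")

# On-boarding a new tenant becomes a software operation:
apply_tenant(TenantNetwork(
    tenant="contoso",
    vni=5001,
    subnets=[Subnet("web", "10.1.1.0/24"), Subnet("data", "10.1.2.0/24")],
    allow_inbound=["tcp/443 -> web"],
))
```

The point isn’t the specific API: it’s that on-boarding a tenant touches a definition like this, not the cabling or the top-of-rack switches.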
With the network architecture of the datacentre defined, you can use commodity compute and storage services to form the bedrock of the SDDC. The benefit is that they’re low-cost to procure, easy to support, and favour a modular, scale-out approach to building capacity and performance.
SDDC technologies for compute and storage also allow you to aggregate both performance and capacity, and to dynamically provision and configure them for each application’s individual performance requirements.
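As a toy illustration of that aggregation, here’s a sketch (all node names and numbers invented) of carving per-application allocations out of a pooled cluster, first-fit style. Real SDDC schedulers are far more sophisticated, weighing IOPS, fault domains and affinity rules too.

```python
# Toy illustration: aggregate node resources into one pool, then carve
# per-application allocations out of it. Names and figures are hypothetical.
nodes = [
    {"name": "hci-01", "cpu": 64, "mem_gb": 512, "ssd_tb": 16},
    {"name": "hci-02", "cpu": 64, "mem_gb": 512, "ssd_tb": 16},
    {"name": "hci-03", "cpu": 64, "mem_gb": 512, "ssd_tb": 16},
]

def place(app: str, cpu: int, mem_gb: int, ssd_tb: float) -> str:
    """First-fit placement against the aggregated pool."""
    for n in nodes:
        if n["cpu"] >= cpu and n["mem_gb"] >= mem_gb and n["ssd_tb"] >= ssd_tb:
            n["cpu"] -= cpu
            n["mem_gb"] -= mem_gb
            n["ssd_tb"] -= ssd_tb
            return n["name"]
    raise RuntimeError(f"no capacity for {app}: time to scale out a node")

print(place("sql-prod", cpu=16, mem_gb=128, ssd_tb=4))  # -> hci-01
print(place("web-farm", cpu=32, mem_gb=64, ssd_tb=1))   # -> hci-01
```

Notice the failure mode: when nothing fits, the answer is to add another commodity node, which is exactly the modular, scale-out economics described above.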
Microsoft enable this with Hyper-V and Storage Spaces Direct, but there are other options from other vendors. How they interoperate with different Public cloud platforms will of course vary, and as I’ve already pointed out, you want to aim for maximum portability and compatibility with your chosen Public cloud provider.
In developing operational designs for SDDC, I’ve found it important to focus on cost recovery. I favour implementing a charge-back model based on defined templates, so that underlying costs are recovered efficiently according to how resources are allocated (not how they’re used). This means defining ratios for compute, memory and storage, as well as charge-back for other resources such as network functions and consumables, including public IP addresses and bandwidth.
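Here’s a minimal sketch of that allocation-based charge-back arithmetic. The template names and unit rates are invented for illustration; the principle is that tenants pay for what they reserve, which is what lets the platform recover its fixed costs predictably.

```python
# Hypothetical monthly unit rates for allocated (not consumed) resources.
RATES = {
    "vcpu": 18.00,           # per vCPU allocated
    "mem_gb": 4.50,          # per GiB of RAM allocated
    "ssd_gb": 0.12,          # per GiB of resilient SSD capacity
    "public_ip": 3.00,       # per reserved public IP address
    "bandwidth_mbps": 0.80,  # per Mbps of committed bandwidth
}

# Invented templates: fixed compute/memory/storage ratios keep charge-back
# simple and the underlying hardware evenly consumed.
TEMPLATES = {
    "small":  {"vcpu": 2, "mem_gb": 8,  "ssd_gb": 128},
    "medium": {"vcpu": 4, "mem_gb": 16, "ssd_gb": 256},
    "large":  {"vcpu": 8, "mem_gb": 32, "ssd_gb": 512},
}

def monthly_charge(template: str, public_ips: int = 0, bandwidth_mbps: int = 0) -> float:
    """Charge for allocation, not usage: the tenant reserves the capacity,
    so the platform's costs are recovered whether or not the workload is busy."""
    alloc = TEMPLATES[template]
    total = sum(RATES[k] * v for k, v in alloc.items())
    total += RATES["public_ip"] * public_ips
    total += RATES["bandwidth_mbps"] * bandwidth_mbps
    return round(total, 2)

# A 'medium' template with one public IP and 100 Mbps committed bandwidth:
print(monthly_charge("medium", public_ips=1, bandwidth_mbps=100))  # 257.72
```

Charging on allocation rather than usage is deliberate: allocated capacity is capacity the platform can’t sell twice, so it’s the honest unit of cost recovery.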
In short, to ensure that my SDDC recovers its costs and supports expansion as needed, without the need to continually go back for more funding, I’ve tried to adopt as many as possible of the approaches I’ve seen platforms such as Azure use.
Companies like Microsoft have developed a wealth of technology that creates opportunities for us to level up in the on-premises world and achieve some of the operational improvements we take for granted in Public cloud.
As I have said before, nothing is going to beat the power and value of Public cloud for transforming applications and businesses. But for what we have left on-premises, taking advantage of what SDDC has to offer isn’t something we’ve done enough of.