Now that all of the major public cloud providers have clearly defined hybrid cloud solutions on the market, we can start to compare the different approaches from Amazon Web Services (AWS), Microsoft Azure and Google Cloud.
Hybrid cloud is an enterprise IT strategy that involves operating certain workloads across different infrastructure environments, whether that is one of the major public cloud providers, a private cloud or on-premise infrastructure, typically with a homegrown orchestration layer on top. Multi-cloud is a similar idea, but tends not to involve private cloud or on-premise infrastructure.
This approach is particularly important to organisations with certain applications that will need to remain on-premise for the time being, such as low-latency applications on a factory floor, or those with data residency concerns.
According to the RightScale State of the Cloud report 2019, hybrid cloud is the dominant enterprise strategy, with 58 per cent of respondents stating that it is their preferred approach, 17 per cent opting for multiple public clouds and just 10 per cent opting for a single public cloud provider.
The advantages of hybrid cloud include the ability to diversify spend and skills, build resiliency, and cherry-pick features and capabilities based on where each vendor's strengths lie, all while avoiding the dreaded vendor lock-in.
It is in the public cloud vendors' best interests that customers run everything in the public cloud, but they are increasingly aware that customers don't necessarily want to work this way and are providing more flexible options to accommodate that fact.
As DataStax CEO Billy Bosworth told Computerworld UK: "It really has only been, I would say, the last 12 to 18 months, where I feel like the market has hit that decided tipping point, that multi-cloud is not an option, it's a reality."
Taking them one by one, here are the major vendors' options for running hybrid.
Microsoft Azure Stack
Microsoft has long been the go-to option for hybrid deployments amongst the big three with its well established Azure Stack, which was available in technical preview as long ago as January 2016.
It allows customers to leverage various Azure cloud services from their own data centre and, in theory, eases the transition to the cloud for highly regulated or more cautious organisations. Applications can be built for the Azure cloud and deployed either on Microsoft cloud infrastructure or within the confines of a customer's own data centre, without rewriting any code.
Then, at the Ignite conference in November 2019, Microsoft announced the technical preview of Azure Arc, a multi-cloud management layer which essentially extends Azure Stack to other public cloud platforms, including AWS and GCP. The idea is to give customers a single view of all of their apps and services regardless of where they sit.
“Enterprises rely on a hybrid technology approach to take advantage of their on-premises investment and, at the same time, utilise cloud innovation,” Azure corporate vice president Julia White wrote in a blog post.
“As more business operations and applications expand to include edge devices and multiple clouds, hybrid capabilities must enable apps to run seamlessly across on-premises, multi-cloud, and edge devices while providing consistent management and security across all distributed locations.”
Nick McQuire, vice president of enterprise research at CCS Insight, said at the time: “With Azure Arc, and with it, the arrival of multi-cloud management in Azure, we are now seeing perhaps the biggest shift yet in Azure’s strategic evolution.”
Under the covers, Azure Stack – now referred to as Azure Stack Hub by the vendor – brings a set of core services to customers' own data centres, such as virtual machines, storage, networking, VPN gateway and load balancing, as well as platform services like functions, containers and databases, and identity services like Active Directory.
Azure Stack can be run on hardware from a variety of partner vendors, including HPE, Dell EMC, Cisco, Huawei and Lenovo.
It is priced in the same flexible manner as Azure public cloud, so you pay for what you use, starting at $0.008 per virtual CPU per hour, but you will be contracted for software support with Microsoft and hardware support with the chosen vendor.
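To put that base rate in perspective, here is a back-of-the-envelope sketch of how the pay-as-you-go charge adds up. The $0.008 per virtual CPU per hour figure comes from the article; the instance size and hours below are hypothetical examples, and the estimate excludes the separate software and hardware support contracts mentioned above.

```python
# Rough estimate of Azure Stack's pay-as-you-go compute charge.
# Base rate is the figure quoted in the article; everything else
# here (VM size, hours) is an illustrative assumption.

VCPU_RATE_PER_HOUR = 0.008  # USD per virtual CPU per hour

def monthly_vm_cost(vcpus: int, hours: float = 730.0) -> float:
    """Estimate the monthly compute charge for one VM.

    730 hours is roughly one month of around-the-clock running.
    """
    return vcpus * hours * VCPU_RATE_PER_HOUR

# A hypothetical 8-vCPU VM running continuously:
print(f"${monthly_vm_cost(8):.2f}")  # about $46.72 per month
```

At that rate, compute stays cheap until fleets grow large; the real cost drivers are the support contracts and the partner hardware itself.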
For a more detailed breakdown of Azure Stack, our friends over at Network World have taken a closer look.
AWS Outposts
AWS signalled its first serious move into hybrid deployments at its re:Invent conference in 2018 with the launch of Outposts, a fully managed service whereby AWS delivers pre-configured hardware and software to the customer's on-premise data centre or co-location space, allowing applications to run in a cloud-native manner without operating out of AWS data centres.
"Customers will order racks with the same hardware AWS uses in all of our regions, with software with AWS services on it - like compute and storage - and then you can work in two variants," AWS CEO Andy Jassy said at the time.
Those two flavours are: running VMware Cloud on AWS, or running compute and storage on-premise using the same native AWS APIs used in the AWS cloud.
"Symbolically, Outposts is another acknowledgement by AWS that most enterprises want or need to split workloads and data between on-premise systems and public cloud services," Kurt Marko, an independent technology analyst, told Computerworld at the time.
Currently, customers can configure their Outposts with a variety of EC2 instances and EBS volumes for storage.
Then, once the service is made generally available in late 2019, Outposts will locally support Amazon ECS and Amazon EKS clusters for container-based applications, Amazon EMR clusters for data analytics, and Amazon RDS instances for relational database services; with the machine learning toolkit SageMaker and Amazon MSK for streaming data applications promised after launch.
In a blog post published in September 2019, Matt Garman, VP of AWS compute services, added some more detail around the project and highlighted some common use cases they are seeing so far, including interest from customers in the manufacturing, healthcare, financial services, media and entertainment, and telecom industries.
"One of the most common scenarios is applications that need single-digit millisecond latency to end-users or onsite equipment," Garman wrote. "Customers may need to run compute-intensive workloads on their manufacturing factory floors with precision and quality.
"Others have graphics-intensive applications such as image analysis that need low-latency access to end-users or storage-intensive workloads that collect and process hundreds of TBs of data a day."
Outposts is slated for general availability by the end of 2019, but pricing information is not yet openly available.
Google Cloud Anthos
Google Cloud made a splash in April 2019 when it announced the general availability of Anthos: a new platform that promises the ability to run applications on-premise, in the Google Cloud and, crucially, with other major public cloud providers like Microsoft Azure and Amazon Web Services (AWS).