Google announced the availability of Google Kubernetes Engine (GKE) Autopilot, a fully managed and operated Kubernetes environment for Google Cloud customers. The underlying infrastructure is completely hidden from users, while the environment needed for running cloud-native workloads is still exposed.

Google Kubernetes Engine is one of the first managed container orchestration services available in the public cloud. Since its launch in 2015, Google has been enhancing the service to make it enterprise-ready. GKE Autopilot is the latest move to accelerate the adoption of Google Cloud and create a unique differentiation for Google. 

Like any distributed platform, Kubernetes has two components – the control plane and worker nodes. The control plane is responsible for managing the entire cluster infrastructure and the workloads running on it. The nodes act as the workhorses that run customer applications packaged as containers. 

When Kubernetes became available as a managed service, cloud providers owned and managed the control plane, which is the critical part of the cluster infrastructure. Since the worker nodes are essentially a set of virtual machines, they have always been accessible to users. In GKE, the worker nodes translate to a set of Google Compute Engine (GCE) instances owned by the customer.

There are two aspects to running a managed Kubernetes cluster in the cloud. The first is making the right choice of compute, storage, and network configuration, while the second is maintaining the worker nodes as part of day-2 operations. The former deals with selecting the right VM size, choosing the container network interface, and the storage overlay. Once the cluster is provisioned and running, customers need to manage and maintain the worker nodes. Depending on the OS, network, and storage stack, they may have to perform ongoing maintenance, patching, and upgrades of the worker nodes. Despite being a managed service, container orchestration leaves quite a bit of management and configuration to customers. It is firmly grounded in the philosophy of shared responsibility applicable to most public cloud-based services.

With GKE Autopilot, Google wants to manage the entire Kubernetes infrastructure and not just the control plane. It dramatically reduces the decisions that need to be made during the creation of the cluster. The stack Google chose for GKE Autopilot has best-of-breed components such as Shielded VMs, VPC-based public/private networking, and CSI-based storage, among others.

GKE Autopilot aims to simplify the choices for provisioning a secure and production-grade cluster infrastructure. There are very few knobs and switches available during the provisioning of a GKE Autopilot cluster. You don’t even have to decide the number of worker nodes and their configuration while creating the cluster. The Autopilot service will determine a best-in-class configuration and the ideal fleet size at runtime based on the characteristics of the workloads you deploy.

The most exciting aspect of GKE Autopilot is the billing based on the unit of deployment, the pod.

GKE Standard, the original avatar of GKE, has a flat cluster management fee plus the cost of GCE instances. It doesn’t matter how many pods – the fundamental unit of deployment in Kubernetes – you run in the cluster. You are always charged for the number of GCE instances. 

With GKE Autopilot, the fundamental unit of deployment used for calculating the bill shifts from the VM to the pod. While the flat cluster management fee remains, you only pay for the compute, memory, and storage resources consumed by the deployed pods. By default, GKE Autopilot assigns half a vCPU, 2 GiB of RAM, and 1 GiB of ephemeral storage to a pod. Of course, these defaults can be overridden by explicitly specifying the resource requirements in the pod specification.
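As an illustration, the resource requests that drive Autopilot billing are declared with the standard Kubernetes `resources.requests` fields. The workload name and image below are hypothetical placeholders; the request values mirror the defaults mentioned above:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend            # hypothetical workload name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
        - name: web
          image: gcr.io/my-project/web-frontend:1.0   # placeholder image
          resources:
            requests:                 # Autopilot bills per pod based on these requests
              cpu: "500m"             # half a vCPU
              memory: "2Gi"
              ephemeral-storage: "1Gi"
```

If a pod omits these fields, Autopilot applies its defaults, so the bill remains predictable either way.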

Behind the scenes, GKE Autopilot implements an autoscaling policy that dynamically adds and removes worker nodes to accommodate the workload requirements. You aren’t charged for the additional worker nodes, as the unit of deployment and billing is the pod, not the node.

GKE Autopilot takes Kubernetes-as-a-Service to the next level by entirely abstracting the infrastructure. It comes close to a Platform-as-a-Service model, where developers are expected to bring their source code and walk away with a URL. I am waiting for Google to add Istio and Knative to GKE Autopilot, which would bring true platform capabilities, including the ability to scale to zero.

GKE Autopilot comes with its own set of limitations. If you need absolute control and customization of the environment, GKE Standard is still the best choice. For example, configuring third-party storage platforms such as Portworx by Pure Storage, or a network policy based on Tigera Calico, is not supported by GKE Autopilot. Adding nodes with AI accelerators based on GPUs or TPUs is not available either. Deploying applications from the marketplace is another capability missing from GKE Autopilot.

Power users with advanced scenarios will continue to use GKE Standard, while GKE Autopilot becomes the choice for first-time Kubernetes users.

At the time of launch, only Datadog monitoring and GitLab CI/CD capabilities are fully integrated with GKE Autopilot. Other third-party services are expected to become available in the future.

It’s interesting to see the shift in the unit of deployment. For a long time, the VM remained the fundamental unit of deployment and billing. With the introduction of managed Kubernetes, the cluster became the deployment unit. With services such as AWS Fargate for EKS and GKE Autopilot, the pod has become the lowest common denominator as the deployment and billing unit.

With GKE Autopilot, Google delivered another industry first by removing the complexity of running cloud native workloads while creating a strong differentiation factor for its cloud platform.
