AWS Graviton

Setup Istio on AWS EKS with ARM nodes using AWS CDK

Patrik Zelena


Istio is the de facto service mesh for Kubernetes. To leverage Istio, an istiod control plane and the Envoy sidecars must be configured. Fortunately, Istio provides various ways to easily install and configure the mesh.

AWS EKS is a managed Kubernetes service that makes it easy for you to run Kubernetes on AWS. Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications.

Provisioning cloud applications can be a challenging process that requires multiple, sometimes manual, steps. To get around this, we usually write custom scripts or learn new domain-specific languages. This is where the AWS Cloud Development Kit (AWS CDK) comes into the picture: it lets us define cloud resources in a familiar programming language.

Why ARM?

The industry is moving towards the ARM architecture, and with the introduction of Apple’s M1 SoC it just got more exciting. But to stay on topic, let’s take a look at AWS’s Graviton. Compared to x86 EC2 instances, Graviton delivers more performance per watt and strong security at a lower price. In other words, with ARM nodes we get more power for less money. That sounds promising, right?

Challenges

To achieve our goal, we should first take a look at ARM support in this area.

Kubernetes, fortunately, supports ARM nodes out of the box, and EKS can handle Graviton-based EC2 instances as workers, so on this front we are good to go.

Istio, on the other hand, does not publish multi-arch images, so we do need to step in a bit here.

Prerequisites

  • AWS CLI configured
  • AWS CDK installed

AWS CDK lets us use a language of our choice from the supported list. For this article I chose Java, so a JDK and a Java IDE (or any editor) are also prerequisites.

Creating the Cluster

To create an AWS CDK project, run the init command in an empty directory.

Working with the AWS CDK in Java

mkdir eks-arm
cd eks-arm
cdk init app --language java

With this, we will have a pre-created CDK App and a Stack. To keep it simple, we will use this stack.
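The generated app wires an App to the Stack roughly like the sketch below. The class names are illustrative (cdk init derives them from the directory name); the explicit env is my own addition and is only needed if you later use lookups:

```java
import software.amazon.awscdk.App;
import software.amazon.awscdk.Environment;
import software.amazon.awscdk.StackProps;

// Illustrative entry point, similar to what `cdk init` generates.
public final class EksArmApp {
    public static void main(final String[] args) {
        final var app = new App();
        new EksArmStack(app, "EksArmStack", StackProps.builder()
                // An explicit account/region is required if the stack
                // performs lookups (e.g. Vpc.fromLookup).
                .env(Environment.builder()
                        .account(System.getenv("CDK_DEFAULT_ACCOUNT"))
                        .region(System.getenv("CDK_DEFAULT_REGION"))
                        .build())
                .build());
        app.synth();
    }
}
```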

Let’s create a Construct inside the Stack to hold the Cluster together.
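Such a construct might look like the following sketch (the class name ArmEksCluster is my own choice, not from the generated project):

```java
import software.constructs.Construct;

// A construct to group the cluster-related resources together.
// The name ArmEksCluster is illustrative.
public final class ArmEksCluster extends Construct {
    public ArmEksCluster(final Construct scope, final String id) {
        super(scope, id);
        // The VPC, worker role, cluster and Auto Scaling Group from
        // the following sections are created in here, so `this` in
        // the snippets below refers to this construct.
    }
}
```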

Starting point

Now we need a VPC to work with: create a brand new one, or use the fromLookup function to find an existing one.

final var vpc = Vpc.Builder.create(this, "Vpc").maxAzs(1).build();
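If you would rather reuse an existing VPC, a fromLookup-based variant could look like this sketch (the VPC id is a placeholder, and the lookup requires an explicit account/region on the stack’s env):

```java
// Look up an existing VPC instead of creating a new one.
// "vpc-0123456789abcdef0" is a placeholder id.
final var vpc = Vpc.fromLookup(this, "Vpc",
        VpcLookupOptions.builder()
                .vpcId("vpc-0123456789abcdef0")
                .build());
```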

Next, an IAM Role is needed for the workers; for this, the ec2.amazonaws.com service principal is used (see AWS JSON policy elements: Principal).

final var workerRole = Role.Builder.create(this, "EKSWorkerRole")
        .roleName("EKSWorkerRole")
        .assumedBy(new ServicePrincipal("ec2.amazonaws.com"))
        .build();

Right now, we know the network to work in and the role that the worker nodes can use. Let’s create the cluster itself.

final var eksCluster = Cluster.Builder.create(this, "EksCluster")
        .clusterName("my-arm-cluster")
        .vpc(vpc)
        .defaultCapacity(0)
        .version(KubernetesVersion.V1_21)
        .build();

As you might have noticed, there is nothing ARM-specific up to this point. Another thing that may look strange is that the defaultCapacity for the cluster is explicitly set to 0. This is intentional, since we are going to use an Auto Scaling Group instead.

final var asGroup = AutoScalingGroup.Builder.create(this, "ASG")
        .vpc(vpc)
        .role(workerRole)
        .minCapacity(2)
        .maxCapacity(4)
        .instanceType(
                InstanceType.of(
                        InstanceClass.COMPUTE6_GRAVITON2,
                        InstanceSize.MEDIUM))
        .machineImage(
                EksOptimizedImage.Builder.create()
                        .kubernetesVersion("1.21")
                        .cpuArch(CpuArch.ARM_64)
                        .nodeType(NodeType.STANDARD)
                        .build())
        .updatePolicy(UpdatePolicy.rollingUpdate())
        .build();

Let’s break it down a bit. Here we create an ASG within the VPC we created earlier, using the role we created for the workers. We define the number of nodes we want: in this case we start with 2 nodes and let AWS spin up 2 more, for 4 in total. The instances we request are C6g.medium machines. Any Graviton-based instance type can be used, but I recommend sticking with compute- or memory-optimized machines for workers.
Finally, we specify the AMI: since these machines will serve as EKS worker nodes, we use an EksOptimizedImage for the matching Kubernetes version and, of course, with the ARM_64 CPU architecture.

To link the Auto Scaling Group to the cluster, call connectAutoScalingGroupCapacity on the cluster object with the group and any further configuration if needed.

eksCluster.connectAutoScalingGroupCapacity(
        asGroup,
        AutoScalingGroupOptions.builder().build());

Install Istio

Although installing Istio with Helm is still considered an alpha feature, it is the most convenient way to get it onto our cluster with CDK.

Installing Istio with Helm is a three-part job: first the base chart, then istiod, and finally the gateway should be installed.

Installing the base is simple; we just need to specify the correct version for it.

final var base = cluster.addHelmChart(
        "IstioBase",
        HelmChartOptions.builder()
                .chart("base")
                .version("1.13.2")
                .repository("https://istio-release.storage.googleapis.com/charts")
                .namespace("istio-system")
                .build());

This will install all the CRDs and Roles that are essential for Istio. Since the base chart contains no container images, it can be installed as is; the istiod install, however, needs some tweaking. As there is no official ARM support yet, the awesome community created OCI images for aarch64, so we just need to tell the istiod chart to use those images.

final var istiod = cluster.addHelmChart(
        "Istiod",
        HelmChartOptions.builder()
                .chart("istiod")
                .repository("https://istio-release.storage.googleapis.com/charts")
                .version("1.13.2")
                .namespace("istio-system")
                .values(Map.of(
                        "pilot", Map.of(
                                "hub", "ghcr.io/resf/istio"),
                        "global", Map.of(
                                "hub", "ghcr.io/resf/istio",
                                "tag", "1.13.0")))
                .build());

To make sure the CRDs are ready before istiod is installed, we can use the addDependency method.

istiod.getNode().addDependency(base);

Optionally we can install the gateway as well.

final var gateway = cluster.addHelmChart(
        "IstioGateway",
        HelmChartOptions.builder()
                .chart("gateway")
                .version("1.13.2")
                .repository("https://istio-release.storage.googleapis.com/charts")
                .namespace("istio-system")
                .build());

Just like with the base chart, it doesn’t need any changes: the gateway’s proxy image is configured automatically through the injection webhook, so it picks up the same settings as the mesh.
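If you want the same ordering guarantee for the gateway (an assumption on my part, mirroring the base-to-istiod dependency above), the same addDependency pattern applies:

```java
// Make the gateway chart wait for istiod, just like istiod waits for base.
gateway.getNode().addDependency(istiod);
```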

Only one thing is left: run cdk deploy to create the cluster with ARM nodes and Istio. That’s it!

Thank you for sticking with me to the end; here is the whole snippet you are looking for.
