Prerequisites

Overview

To install and set up the Kube OBLV Stack chart, a few important steps need to be completed first. These steps configure the dependencies and infrastructure the stack relies on. The process includes:

  1. Setting up the EKS Nodegroup with Nitro Enclaves support to enable enclave-enabled worker nodes in your Kubernetes cluster.
  2. Configuring the Service Account and attaching the required IAM policies to ensure secure access to resources.
  3. Setting up additional controllers, such as the AWS Load Balancer Controller and the Bitnami External DNS Addon, to manage load balancers and DNS records for your applications.

Once these configurations are in place, you can proceed with the installation process.

User guide

This page is for administrators who want to deploy an application with OBLV Deploy. If you are a user and want to connect to a deployed application, refer to the Making an Attested Connection guide.

The oblv-deploy-stack is a Helm chart designed for Kubernetes deployment. It includes the OBLV Deploy Helm chart, which installs OBLV Deploy, as well as the required dependencies. Each dependency is a controller that needs its own configuration: for each one, you create a service account and attach the corresponding IAM policy to it.

The Kube OBLV Stack chart can be installed with all the bundled dependencies, or you can choose to install only the required ones if they have already been installed in the cluster.

How to choose dependencies

When installing the oblv-deploy-stack umbrella chart, set the enabled value to true for each controller or subchart that you want to install. For example, if the AWS Load Balancer Controller is not already installed, you would set:

  --set aws-load-balancer-controller.enabled=true
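For example, a complete install command might look like the following. This is only a sketch: the Helm repository name (oblv) and the external-dns subchart key are assumptions, so check the chart's values.yaml for the exact keys available in your version.

# Illustrative install command; the repository name and subchart keys are assumptions
helm install oblv-deploy-stack oblv/oblv-deploy-stack \
  --namespace <your-namespace> \
  --set aws-load-balancer-controller.enabled=true \
  --set external-dns.enabled=false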

Required Tools

Before starting, ensure you have the following tools installed and configured:

  • bash shell: A Unix shell to execute commands.
  • AWS CLI version 2: Used to interact with AWS services. For installation instructions, see Getting started with the AWS CLI.
  • eksctl: A command-line tool for creating and managing Kubernetes clusters on Amazon EKS. For installation instructions, see Installing or updating eksctl.
  • jq: A lightweight and flexible command-line JSON processor. For installation instructions, see Download jq.
  • kubectl (version 1.20 or later): The Kubernetes command-line tool for deploying applications, managing cluster resources, and viewing logs. For installation instructions, see Installing or updating kubectl.
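A quick way to confirm that each tool is installed and on your PATH is to print its version:

# Print the version of each required tool to confirm it is installed
aws --version
eksctl version
jq --version
kubectl version --client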

Prerequisites for EKS Nodegroup

To enable Nitro Enclaves in your Kubernetes cluster, you need to first configure the EKS Nodegroup with Nitro Enclaves support. This process involves creating a launch template and adding a node group to your existing cluster to ensure enclave-enabled worker nodes are available.

Reference

For more details, refer to the AWS Nitro Enclaves Kubernetes Guide.

1
Create a Launch Template with Nitro Enclaves Support

Create a launch template that will be used to launch enclave-enabled worker nodes (Amazon EC2 instances) in the cluster. When creating the launch template, ensure the following:

  • Specify a supported instance type, such as m5.xlarge or c5.xlarge. Refer to the AWS Nitro Enclaves documentation for a complete list of supported instance types.
  • Enable Nitro Enclaves.
  • Add the following user data to automate the AWS Nitro Enclaves CLI installation and to preallocate memory and vCPUs for enclaves. The CPU_COUNT and MEMORY_MIB variables in the user data specify the number of vCPUs and the amount of memory (in MiB) reserved for the enclave on each EC2 instance. Configure these values based on your requirements.
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="==MYBOUNDARY=="

--==MYBOUNDARY==
Content-Type: text/cloud-config; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cloud-config.txt"

#cloud-config
bootcmd:
- dnf install aws-nitro-enclaves-cli -y

--==MYBOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"

#!/bin/bash -e
readonly NE_ALLOCATOR_SPEC_PATH="/etc/nitro_enclaves/allocator.yaml"
# Node resources that will be allocated for Nitro Enclaves
readonly CPU_COUNT=2
readonly MEMORY_MIB=1024

# Update enclave's allocator specification: allocator.yaml
sed -i "s/cpu_count:.*/cpu_count: $CPU_COUNT/g" $NE_ALLOCATOR_SPEC_PATH
sed -i "s/memory_mib:.*/memory_mib: $MEMORY_MIB/g" $NE_ALLOCATOR_SPEC_PATH
# Restart the nitro-enclaves-allocator service so the changes take effect.
systemctl enable --now nitro-enclaves-allocator.service
echo "NE user data script has finished successfully."
--==MYBOUNDARY==--
Using the AWS CLI

You can use the AWS CLI to set up the launch template. The following shell script demonstrates how to create the launch template. You can customize the instance type, AWS region, launch template name, and the enclave's CPU and memory allocation as needed.


# Define variables for the launch template configuration
readonly INSTANCE_TYPE=<your-instance-type> # Replace with your instance type, e.g., m5.2xlarge
readonly AWS_REGION=<your-aws-region> # Replace with the AWS region where your EKS cluster is running
readonly ENCLAVE_CPU_COUNT=2 # Replace with the number of CPUs allocated for the enclave
readonly ENCLAVE_MEMORY_MIB=1024 # Replace with the memory (in MiB) allocated for the enclave
readonly LT_NAME=<your-launch-template-name> # Replace with your desired launch template name

# Define the user data script for the launch template
readonly lt_user_data=$(cat<<EOF
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="==MYBOUNDARY=="

--==MYBOUNDARY==
Content-Type: text/cloud-config; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cloud-config.txt"

#cloud-config
bootcmd:
- dnf install aws-nitro-enclaves-cli -y

--==MYBOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"

#!/bin/bash -e
readonly NE_ALLOCATOR_SPEC_PATH="/etc/nitro_enclaves/allocator.yaml"
# Node resources that will be allocated for Nitro Enclaves
readonly CPU_COUNT=$ENCLAVE_CPU_COUNT
readonly MEMORY_MIB=$ENCLAVE_MEMORY_MIB

# Update enclave's allocator specification: allocator.yaml
sed -i "s/cpu_count:.*/cpu_count: \$CPU_COUNT/g" \$NE_ALLOCATOR_SPEC_PATH
sed -i "s/memory_mib:.*/memory_mib: \$MEMORY_MIB/g" \$NE_ALLOCATOR_SPEC_PATH
# Restart the nitro-enclaves-allocator service so the changes take effect.
systemctl enable --now nitro-enclaves-allocator.service
echo "NE user data script has finished successfully."
--==MYBOUNDARY==--
EOF
)

# Encode the user data script in base64
readonly b64_lt_user_data=$(printf '%s' "$lt_user_data" | base64 | tr -d '\n')

# Define the launch template data
readonly launch_template_data=$(cat <<EOF
{
  "InstanceType": "$INSTANCE_TYPE",
  "EnclaveOptions": {
    "Enabled": true
  },
  "UserData": "${b64_lt_user_data}"
}
EOF
)

# Create the launch template using the AWS CLI
aws ec2 create-launch-template \
  --region $AWS_REGION \
  --launch-template-name $LT_NAME \
  --launch-template-data "$launch_template_data"
Note

After creating the launch template, copy the launch template ID from the command output. You will need it in subsequent steps.
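If you did not capture the ID from the command output, you can look it up by name afterwards. The snippet below reuses the AWS_REGION and LT_NAME variables defined in the script above.

# Look up the launch template ID by name
aws ec2 describe-launch-templates \
  --region $AWS_REGION \
  --launch-template-names $LT_NAME \
  --query "LaunchTemplates[0].LaunchTemplateId" \
  --output text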

2
Add a Node Group to the Existing Cluster

Use the launch template created in the previous step to add a node group to your existing Amazon EKS cluster. Create a configuration file (e.g., nodegroup_config.yaml) with the following content:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: <your-cluster-name> # Replace with your existing cluster name
  region: <your-region> # Replace with your AWS region

managedNodeGroups:
  - name: nitro-enclaves-group
    launchTemplate:
      id: lt-01234567890abcdef # Replace with your launch template ID
      version: "1" # Replace with your launch template version
    desiredCapacity: 2 # Number of nodes to add
    labels:
      aws-nitro-enclaves-k8s-dp: enabled

Run the following command to add the node group:

eksctl create nodegroup -f nodegroup_config.yaml

After the node group is created, verify the nodes using kubectl get nodes.
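Because the node group applies the aws-nitro-enclaves-k8s-dp: enabled label, you can also list just the enclave-enabled nodes:

# List only the nodes created by the Nitro Enclaves node group
kubectl get nodes -l aws-nitro-enclaves-k8s-dp=enabled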

Note

If your EKS cluster is not managed by eksctl, you may need to provide the VPC, securityGroup, and subnet information manually.

Retrieve VPC and Subnet Details

You can retrieve the VPC and subnet details using the following commands:

readonly AWS_REGION=<your-region>     # Replace with your AWS region
readonly CLUSTER_NAME=<your-cluster-name> # Replace with your existing cluster name

# Retrieve VPC details
aws eks describe-cluster \
  --name $CLUSTER_NAME \
  --region $AWS_REGION \
  --query "cluster.resourcesVpcConfig" \
  --output json | jq

# Retrieve subnet IDs and format them
subnet_ids=$(aws eks describe-cluster \
  --name $CLUSTER_NAME \
  --region $AWS_REGION \
  --query "cluster.resourcesVpcConfig.subnetIds" \
  --output json | jq -r '.[]' | xargs)

# Display subnet details in a table format
aws ec2 describe-subnets \
  --subnet-ids $subnet_ids \
  --region $AWS_REGION \
  --query "Subnets[*].{ID:SubnetId,AZ:AvailabilityZone,MapPublicIpOnLaunch:MapPublicIpOnLaunch}" \
  --output table | cat
Updated Nodegroup ClusterConfig

Use the following configuration to create a node group with private networking and specific subnets:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: <your-cluster-name> # Replace with your existing cluster name
  region: <your-region> # Replace with your AWS region

managedNodeGroups:
  - name: nitro-enclaves-group
    launchTemplate:
      id: lt-01234567890abcdef # Replace with your launch template ID
      version: "1" # Replace with your launch template version
    desiredCapacity: 1 # Number of nodes to add
    labels:
      aws-nitro-enclaves-k8s-dp: enabled
    privateNetworking: true # Use private subnets
    subnets:
      - subnet-xxxxxxxxxxxxxxxxx # Replace with your private subnet ID for Zone 1
      - subnet-yyyyyyyyyyyyyyyyy # Replace with your private subnet ID for Zone 2
      - subnet-zzzzzzzzzzzzzzzzz # Replace with your private subnet ID for Zone 3

vpc:
  id: vpc-xxxxxxxxxxxxxxxxx # Replace with your VPC ID
  subnets:
    private:
      <your-zone-1>: # Replace <your-zone-1> with the name of your first availability zone
        id: subnet-xxxxxxxxxxxxxxxxx # Replace with your private subnet ID for Zone 1
      <your-zone-2>: # Replace <your-zone-2> with the name of your second availability zone
        id: subnet-yyyyyyyyyyyyyyyyy # Replace with your private subnet ID for Zone 2
    public:
      <your-zone-3>: # Replace <your-zone-3> with the name of your third availability zone
        id: subnet-zzzzzzzzzzzzzzzzz # Replace with your public subnet ID for Zone 3
  securityGroup: sg-xxxxxxxxxxxxxxxxx # Replace with your security group ID
3
Install the Nitro Enclaves Device Plugin

Deploy the Nitro Enclaves Device Plugin to your Kubernetes cluster. You can follow the steps in the AWS Nitro Enclaves Kubernetes Guide to install the plugin.

kubectl apply -f https://raw.githubusercontent.com/aws/aws-nitro-enclaves-k8s-device-plugin/main/aws-nitro-enclaves-k8s-ds.yaml
4
Verify the Nitro Enclaves Device Plugin

After deploying the plugin, verify that it is running correctly:

kubectl get pods -n kube-system | grep nitro-enclaves

Ensure the plugin pod is in the Running state.
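You can also check that the enclave-enabled nodes now advertise the enclave device resource (named aws.ec2.nitro/nitro_enclaves by the AWS device plugin) in their allocatable resources:

# Look for the Nitro Enclaves device resource on the enclave-enabled nodes
kubectl describe nodes -l aws-nitro-enclaves-k8s-dp=enabled | grep -i nitro_enclaves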

Prerequisites to pull Enclave Image File

To enable the enclave pod to securely access the resources it needs, we need to configure AWS S3 access. This involves creating an IAM policy that grants the permissions required for the enclave pod to pull the Enclave Image File, which is essential for booting the enclave services. Additionally, we will create a Service Account and attach the IAM Policy to it so that the enclave pod has the necessary permissions.

1
Create the required IAM policy document aws_s3_access.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*",
        "s3:Describe*",
        "s3-object-lambda:Get*",
        "s3-object-lambda:List*"
      ],
      "Resource": "*"
    }
  ]
}
2
Using the created policy document, create an IAM Policy for the enclave pod.
aws iam create-policy \
  --policy-name "EnclavePodS3AccessPolicy" \
  --policy-document file://aws_s3_access.json
Important

Remember to copy the Amazon Resource Name (ARN) of the IAM Policy object returned by this command. The ARN will be used in the next step.

3
Create a Service Account for the enclave pod and attach the IAM Policy to it.
eksctl create iamserviceaccount \
  --name enclave-pod \
  --namespace default \
  --cluster ${CLUSTER_NAME} \
  --attach-policy-arn={ARN of the created IAM Policy from the previous step} \
  --approve \
  --override-existing-serviceaccounts \
  --region ${CLUSTER_REGION}
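To confirm that eksctl created the service account and annotated it with the IAM role, inspect the service account directly:

# The eks.amazonaws.com/role-arn annotation should point at the role created by eksctl
kubectl get serviceaccount enclave-pod -n default -o yaml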

Prerequisites for Updating Pod Metadata in the Kubernetes API Server

Once the service account is created, we can create a Role that allows patching pod metadata in the Kubernetes API server and then bind that Role to the service account.

1
Step 1: Create the Kubernetes RBAC Configuration File (patch_pod_rbac.yaml)

Define the RBAC configuration to grant the required permissions for patching pod metadata. Save the following YAML content to a file named patch_pod_rbac.yaml:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: patch-pod-role
  namespace: default
rules:
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
      - patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: patch-pod-binding
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: patch-pod-role
subjects:
  - kind: ServiceAccount
    name: enclave-pod
    namespace: default
2
Step 2: Apply the RBAC Configuration
kubectl apply -f patch_pod_rbac.yaml
3
Step 3: Verify the Service Account Permissions
kubectl auth can-i patch pods \
  --as=system:serviceaccount:default:enclave-pod \
  --namespace=default
Expected Output

The command should return yes, indicating that the service account has the required permissions.

Prerequisites for the LoadBalancer Controller

The AWS Load Balancer Controller provisions load balancers to expose the applications running inside the enclaves to their users. The IAM policy document required by the controller can be downloaded as shown in the steps below.

1
Download the policy document:
  curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.12.0/docs/install/iam_policy.json
2
Using the downloaded policy document, create the IAM Policy for the LoadBalancer Controller:
aws iam create-policy \
  --policy-name AWSLoadBalancerControllerIAMPolicy \
  --policy-document file://iam_policy.json
Important

Remember to copy the Amazon Resource Name (ARN) of the IAM Policy object returned by this command. The ARN will be used in the next step.

3
Create a service account for the AWS LoadBalancer Controller and attach the policy to it:
eksctl create iamserviceaccount \
  --cluster=${CLUSTER_NAME} \
  --namespace=kube-system \
  --name=aws-load-balancer-controller \
  --role-name AmazonEKSLoadBalancerControllerRole \
  --attach-policy-arn={ARN of the created IAM Policy from the previous step} \
  --region ${CLUSTER_REGION} \
  --approve
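As a quick check, you can print the IAM role annotation that eksctl attached to the new service account:

# Print the IAM role ARN bound to the aws-load-balancer-controller service account
kubectl get serviceaccount aws-load-balancer-controller -n kube-system \
  -o jsonpath='{.metadata.annotations.eks\.amazonaws\.com/role-arn}'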
Reference

You can find more information about these commands in the LoadBalancer Controller reference page.

Prerequisites for the External DNS Addon

The External DNS controller creates DNS records pointing at the load balancers, so that users can connect to the applications hosted inside the enclaves by hostname.

For the External DNS controller to work, it needs permissions to list and change Route 53 resources in order to keep the DNS records in sync.
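The policy below references your hosted zone through the HOSTED_ZONE_ID placeholder. If you only know the domain name, you can look the ID up first; the domain below is a placeholder.

# Find the hosted zone ID for your domain (replace <your-domain> accordingly)
# The returned value is prefixed with /hostedzone/; use only the trailing ID in the policy
aws route53 list-hosted-zones-by-name \
  --dns-name <your-domain> \
  --query "HostedZones[0].Id" \
  --output text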

1
Create a policy document named `external_dns_iam_policy.json`:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "route53:ChangeResourceRecordSets"
      ],
      "Resource": [
        "arn:aws:route53:::hostedzone/${HOSTED_ZONE_ID}"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "route53:ListHostedZones",
        "route53:ListResourceRecordSets"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
2
Using the created policy document, create an IAM Policy for the DNS Controller.
aws iam create-policy \
  --policy-name "ExternalDNSUpdatesPolicy" \
  --policy-document file://external_dns_iam_policy.json
Important

Remember to copy the Amazon Resource Name (ARN) of the IAM Policy object returned by this command. The ARN will be used in the next step.

3
Create a Service Account for the External DNS Controller and attach the IAM Policy to it.
eksctl create iamserviceaccount \
  --name external-dns \
  --namespace kube-system \
  --cluster ${CLUSTER_NAME} \
  --attach-policy-arn={ARN of the created IAM Policy from the previous step} \
  --approve \
  --override-existing-serviceaccounts \
  --region ${CLUSTER_REGION}
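You can confirm the mapping that eksctl created, including the attached policy ARN, with eksctl itself:

# List the IAM service account mapping created for the External DNS controller
eksctl get iamserviceaccount \
  --cluster ${CLUSTER_NAME} \
  --namespace kube-system \
  --name external-dns \
  --region ${CLUSTER_REGION}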
Reference

You can find more information about these commands in the External DNS Addon reference page.

What's Next?

After installing everything OBLV Deploy needs to run, proceed to the Installation and Setup page to continue with the Getting Started guide.