Introduction

We have previously looked at how RBAC works inside Kubernetes.

In this section, we will configure RBAC to support an additional mapping for an Operator Role. For simplicity's sake, the permissions will be identical for both Roles: the Master IAM Role that was used to create the Cluster, and the new Operator Role that will be created and associated with a new Cloud9 instance.

Creating a new Cloud9 Workspace

  • Launch a new Cloud9 Workspace in your selected region and name it “EKS-Operator”

  • Create a new IAM Role with Administrator Access named “EKS-Operator” and associate it with the newly created Cloud9 Workspace Instance

  • Change the Cloud9 Workspace Instance configuration and disable the “AWS managed temporary credentials” option

  • Remove any existing AWS credentials file to clean out stale configuration

rm -vf ${HOME}/.aws/credentials
  • Create the default ~/.kube directory for storing kubectl configuration
mkdir -p ~/.kube
  • Install kubectl
sudo curl --silent --location -o /usr/local/bin/kubectl https://amazon-eks.s3-us-west-2.amazonaws.com/1.12.7/2019-03-27/bin/linux/amd64/kubectl

sudo chmod +x /usr/local/bin/kubectl
  • Install AWS IAM Authenticator
go get -u -v github.com/kubernetes-sigs/aws-iam-authenticator/cmd/aws-iam-authenticator
sudo mv ~/go/bin/aws-iam-authenticator /usr/local/bin/aws-iam-authenticator
  • Install JQ and envsubst
sudo yum -y install jq gettext
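jq and envsubst are used throughout the workshop for extracting values from AWS CLI JSON output and templating manifests. As a quick sanity check of the jq install, you can extract the account ID from a caller-identity-shaped document (the JSON below is a hard-coded sample, not live output):

```shell
# Hard-coded sample in the shape of `aws sts get-caller-identity` output
SAMPLE='{"Account":"725135641014","Arn":"arn:aws:iam::725135641014:role/EKS-Operator"}'

# jq -r prints the raw (unquoted) value of the Account field
ACCOUNT_ID=$(echo "${SAMPLE}" | jq -r .Account)
echo "Account ID: ${ACCOUNT_ID}"
```

The same `jq -r` pattern is handy later for pulling the Role ARN out of live `aws sts get-caller-identity` output.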
  • Verify the binaries are in the path and executable
for command in kubectl aws-iam-authenticator jq envsubst
  do
    which $command &>/dev/null && echo "$command in path" || echo "$command NOT FOUND"
  done
  • Create the kubectl configuration file, replacing region and cluster_name with your Cluster's values
aws eks --region region update-kubeconfig --name cluster_name
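The same command can be sketched with shell variables, which makes it easy to reuse in later steps. The region and Cluster name below are placeholders, not values from this workshop; the fallback message only exists so the snippet degrades gracefully when the AWS CLI is not configured:

```shell
# Placeholder values -- replace with your own region and Cluster name
export AWS_REGION="us-west-2"
export CLUSTER_NAME="EKS"

# Writes or updates ~/.kube/config with an entry for the Cluster
aws eks --region "${AWS_REGION}" update-kubeconfig --name "${CLUSTER_NAME}" \
  || echo "update-kubeconfig failed (is the AWS CLI installed and configured?)"
```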
  • Test your configuration and validate access permissions to your EKS Cluster
kubectl get svc

The above command should fail with one of the following errors: “could not get token: AccessDenied: Access denied”, “error: You must be logged in to the server (Unauthorized)”, or “error: the server doesn't have a resource type "svc"”. This is expected: the EKS-Operator IAM Role is not yet mapped to any Kubernetes identity in the aws-auth ConfigMap.
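Before moving on, it can help to confirm which IAM identity the Workspace is actually using. One way to check (assuming the AWS CLI is installed; the fallback message is only there so the snippet degrades gracefully):

```shell
# Ask STS which identity this shell's credentials resolve to.
# With "AWS managed temporary credentials" disabled, the Arn should point at
# the EKS-Operator instance role, not your personal IAM identity.
IDENTITY_ARN=$(aws sts get-caller-identity --query Arn --output text 2>/dev/null \
  || echo "unknown (is the AWS CLI installed and the instance role attached?)")
echo "Current identity: ${IDENTITY_ARN}"
```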

  • Access the Cloud9 Workspace Instance you used to create your EKS Cluster

  • View the contents of your aws-auth ConfigMap for the Cluster. A single mapRoles entry should exist, associated with your Worker Nodes' IAM Role

kubectl describe configmap -n kube-system aws-auth
  • It should look similar to this
Name:         aws-auth
Namespace:    kube-system
Labels:       <none>
Annotations:  <none>

Data
====
mapRoles:
----
- groups:
  - system:bootstrappers
  - system:nodes
  rolearn: arn:aws:iam::725135641014:role/eksctl-EKS-nodegroup-ng-83d04169-NodeInstanceRole-1AA8TQN72FJHO
  username: system:node:{{EC2PrivateDNSName}}
  • Open the ConfigMap for editing
kubectl edit -n kube-system configmap/aws-auth
  • Inside the existing mapRoles definition, add a new entry referencing the IAM Role that we created and associated with the Cloud9 Workspace Instance
    - groups:
      - system:masters
      rolearn: arn:aws:iam::725135641014:role/EKS-Operator
      username: operator
  • The final file should look similar to this
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  mapRoles: |
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::725135641014:role/eksctl-EKS-nodegroup-ng-83d04169-NodeInstanceRole-1AA8TQN72FJHO
      username: system:node:{{EC2PrivateDNSName}}
    - groups:
      - system:masters
      rolearn: arn:aws:iam::725135641014:role/EKS-Operator
      username: operator
kind: ConfigMap
metadata:
  creationTimestamp: 2019-05-22T09:50:34Z
  name: aws-auth
  namespace: kube-system
  resourceVersion: "3242"
  selfLink: /api/v1/namespaces/kube-system/configmaps/aws-auth
  uid: 0b9c0825-7c77-11e9-b386-025ea934319c
  • Save the file to commit the changes, then validate that they applied successfully by describing the ConfigMap
kubectl describe configmap -n kube-system aws-auth
  • The output should be similar to this
Name:         aws-auth
Namespace:    kube-system
Labels:       <none>
Annotations:  <none>

Data
====
mapRoles:
----
- groups:
  - system:bootstrappers
  - system:nodes
  rolearn: arn:aws:iam::725135641014:role/eksctl-EKS-nodegroup-ng-83d04169-NodeInstanceRole-1AA8TQN72FJHO
  username: system:node:{{EC2PrivateDNSName}}
- groups:
  - system:masters
  rolearn: arn:aws:iam::725135641014:role/EKS-Operator
  username: operator

Events:  <none>
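If your Cluster was created with eksctl, the same mapping can alternatively be added without hand-editing the ConfigMap by using eksctl's `create iamidentitymapping` subcommand. The Cluster name below is a placeholder, and the fallback message only exists so the snippet degrades gracefully when eksctl or the Cluster is unavailable:

```shell
# Placeholder values -- substitute your own Cluster name and Role ARN
CLUSTER_NAME="EKS"
OPERATOR_ROLE_ARN="arn:aws:iam::725135641014:role/EKS-Operator"

# eksctl appends the mapRoles entry to the aws-auth ConfigMap for us
eksctl create iamidentitymapping \
  --cluster "${CLUSTER_NAME}" \
  --arn "${OPERATOR_ROLE_ARN}" \
  --group system:masters \
  --username operator \
  || echo "eksctl not available or Cluster unreachable"
```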
  • Go back to your EKS-Operator Cloud9 Workspace and attempt to perform the kubectl get svc command again
kubectl get svc
  • The command should now execute successfully, and you should see output similar to this
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   86m
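Because the operator user was placed in the system:masters group, it should have full Cluster permissions. One way to spot-check this is kubectl's built-in authorization check (the fallback value is only for when no Cluster is reachable):

```shell
# "can-i '*' '*'" asks the API server whether the current identity may
# perform every verb on every resource -- system:masters should answer "yes"
CAN_I=$(kubectl auth can-i '*' '*' 2>/dev/null || echo "unknown")
echo "Full cluster access: ${CAN_I}"
```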