Overview of Ingress Resources in Kubernetes

By default, most inter-component communication within Kubernetes stays internal to the cluster without any form of NAT. Pods within the cluster can be created and destroyed at any time, so inter-pod communication is done via Services, which provide a stable IP address that can be used to reach the desired Pods (normally matched by suitable label selectors).

Ingress within Kubernetes exposes HTTP and HTTPS routes from outside the cluster to services within it, with an Ingress Controller doing the actual routing. Traffic routing is controlled by rules that are defined on the Ingress resource.

In this Lab we are going to create some Pods and services within our cluster to serve the following domains:

http://website1.example.com
http://website2.example.com
http://whoami.example.com

We will install ingress controllers whose rules, provided by an ingress resource, route the traffic to the correct internal services. From these services the traffic reaches the NGINX Pods serving the relevant website.

External requests will go via an HAProxy load balancer, which will send the traffic to the two worker nodes within the cluster.
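
A fragment of the haproxy.cfg for this kind of setup is sketched below. The worker node addresses (192.168.200.11 and 192.168.200.12) are assumptions for illustration and should be replaced with the actual addresses of the worker nodes; the global and defaults sections are omitted.

# forward incoming HTTP to the ingress controllers on both worker nodes
frontend http_in
    bind *:80
    mode tcp
    default_backend k8s_workers

backend k8s_workers
    mode tcp
    balance roundrobin
    server worker1 192.168.200.11:80 check
    server worker2 192.168.200.12:80 check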

The Lab consists of 4 Virtual Machines running Ubuntu 18.04 in VirtualBox, connected by an internal host network on 192.168.200.0/24.

Once the Lab has been set up, the websites will be checked in a browser running on the Host machine that runs the Virtual Machines.

Ingress Controllers

Ingress within Kubernetes is controlled by rules that are set by an ingress resource. The ingress resource is created with rules that link incoming HTTP and HTTPS requests to services running within the cluster.

The linked services then load balance to the endpoints of their associated Pods. For this to work, the ingress resource must also be supported by an ingress controller.

The Ingress controller is an application that configures an HTTP load balancer according to Ingress resources. The load balancer can be a software load balancer running in the cluster or a hardware/cloud load balancer running externally. Different load balancers require different Ingress controller implementations.

There are a number of ingress controllers available, with details at kubernetes.io. Ingress controllers are not part of the standard Kubernetes build and the installation steps vary between implementations.

Installation of NGINX Ingress Controllers

A well-known and popular Ingress Controller is provided by NGINX, and this is the one that will be used in the lab. Installation is done by cloning the relevant GitHub repository and running the appropriate manifests.

The detailed instructions can be found at NGINX Ingress Controller Site

The ingress controllers are created as Pods running within the cluster in their own namespace. The various manifest files create the ingress controller and generate the necessary service accounts.

The process is carried out on the control-plane (master) node of the cluster.

$ git clone https://github.com/nginxinc/kubernetes-ingress/
$ cd kubernetes-ingress/deployments
$ git checkout v1.8.1

The NGINX Ingress controller is deployed in its own namespace, which means that all the components can be easily removed by simply deleting the namespace.

The first step is to run the manifest to create the namespace and service account for the controller.

kubectl apply -f common/ns-and-sa.yaml

This is followed by creating a cluster role and role-binding.

kubectl apply -f rbac/rbac.yaml

Next a TLS certificate and key are created. The default manifest already contains a certificate and key, and as this is a test environment that is acceptable. In a production environment a new certificate and key should be generated.

kubectl apply -f common/default-server-secret.yaml
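
For a production environment a new certificate and key could be generated with openssl and base64-encoded into common/default-server-secret.yaml before it is applied. A sketch, assuming a wildcard common name for the lab domain, is:

openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout default.key -out default.crt \
  -subj "/CN=*.example.com"
# base64-encode the files and paste the values into the secret manifest
base64 -w 0 default.crt
base64 -w 0 default.key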

Then a configmap is created that configures the controller. In our case we will simply use the default settings.

kubectl apply -f common/nginx-config.yaml

There are two ways of deploying the actual controllers, and the choice depends on the number of nodes and the size of the cluster.

They can be created either with a Deployment, in which case the number of controller replicas can be chosen, or with a DaemonSet, in which case a controller is created on each worker node.

In our case as there are only two nodes in the cluster we will deploy a DaemonSet.

kubectl apply -f daemon-set/nginx-ingress.yaml
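
The upstream manifests create everything in the nginx-ingress namespace, so a quick way to confirm that a controller Pod is running on each worker node is:

kubectl get pods -n nginx-ingress -o wide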

Ingress Resource

We now have an ingress controller running on both of the worker nodes, and all traffic into the cluster is sent via the separate HAProxy load balancer running on its own Virtual Machine. The Ingress Resource is written to match incoming requests and send them to the appropriate backend services.

The manifest file for our Ingress Resource is:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-resource-1
spec:
  rules:
  - host: website1.example.com
    http:
      paths:
      - backend:
          serviceName: website1
          servicePort: 80
  - host: website2.example.com
    http:
      paths:
      - backend:
          serviceName: website2
          servicePort: 80
  - host: whoami.example.com
    http:
      paths:
      - backend:
          serviceName: whoami
          servicePort: 80

The manifest is applied with:

kubectl create -f ingress-resource-1.yaml

Creation of Deployments and Exposure as Services

The backend Pods will be created as basic NGINX Deployments, with simple configmaps that replace the default index page so each site can be easily identified.

The third Deployment is a simple application that displays the hostname and IP address of the Pod servicing the request.

Once the Deployments are created they are exposed as ClusterIP services, which provide stable IP addresses for the Pods behind them.

It is these backend services that the rules created by the ingress resource connect to. As traffic hits a service it is load balanced across the Pods within the Deployment.
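
As an illustration, the website1 service that will be created later with kubectl expose is equivalent to a manifest along the following lines (a sketch, assuming the default ClusterIP type and the app: website1 selector taken from the Deployment labels):

apiVersion: v1
kind: Service
metadata:
  name: website1
spec:
  type: ClusterIP
  selector:
    app: website1
  ports:
  - port: 80
    targetPort: 80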

The website1.example.com Deployment is the following:

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: website1
  name: website1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: website1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: website1
    spec:
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - name: index-volume
          mountPath: /usr/share/nginx/html
      volumes:
        - name: index-volume
          configMap:
            name: website1

The website2.example.com Deployment is the following:

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: website2
  name: website2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: website2
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: website2
    spec:
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - name: index-volume
          mountPath: /usr/share/nginx/html
      volumes:
        - name: index-volume
          configMap:
            name: website2

The whoami.example.com Deployment is the following:

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: whoami
  name: whoami
spec:
  replicas: 1
  selector:
    matchLabels:
      app: whoami
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: whoami
    spec:
      containers:
      - image: containous/whoami:latest
        name: whoami
        resources: {}

The following configmaps are also created which are mounted into the NGINX Pods acting as the webservers:

vagrant@k8s-master:~/web-sites$ kubectl get configmaps 
NAME       DATA   AGE
website1   1      25h
website2   1      25h
vagrant@k8s-master:~/web-sites$ kubectl describe configmaps 
Name:         website1
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
index.html:
----

<!DOCTYPE html>
<html>

<body style="background-color:powderblue;">
<h1>This is website1</h1>

</body>
</html>


Events:  <none>


Name:         website2
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
index.html:
----

<!DOCTYPE html>
<html>

<body style="background-color:red;">
<h1>This is website2</h1>

</body>
</html>


Events:  <none>
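
The configmaps themselves can be created directly from index.html files. A sketch of the commands that would produce the objects shown above (assuming each page is saved as index.html in a per-site directory) is:

kubectl create configmap website1 --from-file=website1/index.html
kubectl create configmap website2 --from-file=website2/index.html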

The Deployments can then be exposed, which creates the necessary services:

kubectl expose deployment website1 --port=80
kubectl expose deployment website2 --port=80
kubectl expose deployment whoami --port=80

We can then check that everything is running in the default namespace.
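
One way to do this is:

kubectl get deployments,pods,services -o wide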

We can see the three Deployments, with the whoami Deployment running three replicas, and each is exposed as a ClusterIP service, normally only reachable from within the cluster.
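
If the whoami Deployment was created with a single replica, as in the manifest above, it can be scaled up so that load balancing across several Pods can be observed, for example:

kubectl scale deployment whoami --replicas=3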

The ingress controllers have been configured with the rules contained within the ingress resource. This can be checked by running the following:

kubectl get ingress
kubectl describe ingress

It can be seen that the ingress links to the services that were set in the rules, and the services link ultimately to the endpoints of the Pods within the Deployments.
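
The link from each service to its Pod endpoints can also be confirmed directly, for example with:

kubectl get endpoints website1 website2 whoami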

Checking the Connection from outside the Cluster

The only thing that remains is to prove that each of the websites can be reached from outside the cluster via the HAProxy load balancer that sends the traffic to the nodes.

This requires a modification to the /etc/hosts file on the Host machine to map the hostnames to the HAProxy load balancer, which in our lab sits at 192.168.200.100:

sudo cat /etc/hosts
[sudo] password for salterje:
127.0.0.1       localhost
127.0.1.1       salterje-PC-X008778

192.168.200.10  k8s-master
192.168.200.100 nginx.example.com
192.168.200.100 website1.example.com
192.168.200.100 website2.example.com
192.168.200.100 website3.example.com
192.168.200.100 whoami.example.com

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

The connections are checked using a browser running on the Host machine, which is able to resolve the hostnames via the modified local hosts file.

It can be seen from the output that requests to the whoami website are being shared between the Pods behind the whoami service, and that the traffic is hitting the ingress controller Pods running on both nodes.
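
One way to see this from the command line on the Host, rather than in a browser, is to send a few requests to the whoami site and watch the Hostname and IP lines change between Pods. A simple check along these lines is:

for i in 1 2 3 4 5; do
  curl -s http://whoami.example.com | grep -E 'Hostname|IP'
  echo "---"
done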

Conclusions

This lab has given an overview of the use of HTTP ingress into a Kubernetes cluster. The main components used are the ingress controllers running within the cluster, which provide a means of linking external traffic to the internal services.

The ingress controllers must have an associated ingress resource that sets the rules that link incoming traffic to the services.

Ingress controllers are not part of the standard build of a cluster and there are a large number of implementations to choose from. In this lab the ingress controllers were from NGINX and were set up by cloning the necessary repository and running the included manifest files, allowing the controllers to run as Pods within the cluster.

The Lab has been set up using an external HAProxy load balancer running in its own Virtual Machine that forwards HTTP traffic to the two worker nodes. In a cloud-based deployment this load balancer is often available from the provider.

By looking at the ingress it can be seen which services are linked to the incoming HTTP traffic, and from these services the actual Pod endpoints can be determined.

The purpose of the services is to provide a stable IP address within the cluster that the ingress resource can route traffic to. This means that Pods can come and go but will always be reachable via the service.

The use of ingress controllers and ingress resources allows incoming requests to be routed using shared components, rather than having a dedicated load balancer for each service.