In this post we’ll walk through a configuration example that combines an ingress controller with one of the most popular Kubernetes service types: LoadBalancer. In this case, I am using the NGINX ingress controller and MetalLB as the load-balancer service.
MetalLB
MetalLB is an open-source load-balancer implementation for Kubernetes clusters. Public clouds already offer this service from their managed Kubernetes offerings, like Amazon EKS, which relies on the cloud’s multi-purpose load balancer, ELB. However, if you run your own Kubernetes cluster on-premises, you can install MetalLB and deploy those services directly through the Kubernetes API, just like in the cloud.
NGINX Ingress Controller
An ingress controller is not an actual Kubernetes Service. It’s an application, usually running as several replicas, that provides smarter Layer 7 load-balancing functions like host- and path-based routing, splitting the traffic across different application services managed by different teams in the same cluster. We chose NGINX because it is among the most popular ingress controllers, alongside HAProxy.
So, if Ingress is sort of a load-balancer, why would I need MetalLB?
In a few words, MetalLB also offers a simple way to enable a BGP-based load-balancer service. That means you can rely on the local network fabric to do true traffic balancing from the border leaf or datacenter gateway, with technologies like ECMP, and land that traffic directly on every replica of the ingress controller, which then forwards it to any of the application services.
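To give an idea of what that looks like, here is a minimal sketch of a MetalLB BGP configuration using the legacy ConfigMap format (the peer address and ASNs below are placeholders; the pool matches the LoadBalancer IPs you’ll see later in this post, and newer MetalLB releases express the same thing with IPAddressPool, BGPPeer and BGPAdvertisement custom resources):

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    peers:
    - peer-address: 10.0.0.1   # placeholder: the fabric leaf/gateway peer
      peer-asn: 64500          # placeholder ASNs
      my-asn: 64501
    address-pools:
    - name: default
      protocol: bgp
      addresses:
      - 10.254.254.0/24        # pool the LoadBalancer IPs are taken from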
The combination of these two elements brings an enhanced experience inside and outside the cluster, and gives the network operators in your organization an easy way to team up with you for a better service.
Lab Setup
This lab uses containerlab to simulate a spine/leaf network fabric, and Kind to emulate a Kubernetes cluster with three control-plane nodes.
For details about how this setup was built, please read my previous post: Calico and MetalLB working together with BGP
The next picture shows you the topology of this lab using Nokia SRLinux:

Ingress Controller Installation
Just run the manifest in my repo as follows:
kubectl apply -f ingress-install.yml
The only difference from the one currently posted at the NGINX Ingress controller site is that I am defining three replicas (check the next extract). Then, I will expose the deployment with a LoadBalancer service.
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/component: controller
  replicas: 3
After you apply the manifest, you should see the controller pods in Running state (the admission jobs will show as Completed):
[root@ctl-a1 ~]# kubectl get pods -n ingress-nginx
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-7k9wc        0/1     Completed   0          7d17h
ingress-nginx-admission-patch-6trtw         0/1     Completed   1          7d17h
ingress-nginx-controller-57ffff5864-g4trp   1/1     Running     0          7d2h
ingress-nginx-controller-57ffff5864-gpxnp   1/1     Running     0          7d17h
ingress-nginx-controller-57ffff5864-q5szk   1/1     Running     0          7d2h
To expose those replicas via the load-balancer, you can use the following commands. I prefer to set “externalTrafficPolicy” to “Local” so the source client IP isn’t obscured (useful for any troubleshooting later), and to avoid bouncing traffic unnecessarily between nodes.
kubectl expose deploy ingress-nginx-controller -n ingress-nginx
kubectl patch service ingress-nginx-controller -p '{"spec":{"type": "LoadBalancer", "externalTrafficPolicy":"Local"}}' -n ingress-nginx
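For reference, the relevant fields of the ingress-nginx-controller Service after the patch end up looking roughly like this (a sketch only; the ports shown are an assumption based on the standard controller deployment’s container ports):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer              # MetalLB assigns the external IP
  externalTrafficPolicy: Local    # keep the client source IP, no extra node hop
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
  ports:
  - name: http
    port: 80
    targetPort: 80                # assumption: controller listens on 80/443
  - name: https
    port: 443
    targetPort: 443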
Using hello-node deployment for testing
To test our setup of Ingress and MetalLB, we created two images to be used as examples. Both print the hostname of the pod, but display either “Service A” or “Service B”. Check the following server.js:
var http = require('http');
var handleRequest = function(request, response) {
  console.log('Received request for URL: ' + request.url);
  response.writeHead(200);
  response.end('Hello Service B! Host/Pod: ' + process.env.HOSTNAME + '\n');
};
var www = http.createServer(handleRequest);
www.listen(8080);
And then we reference it in a Dockerfile so we can build the two images:
## it'll display something like: "Hello Service B! Host/Pod: hello-svcb-5cb5649f58-2cmvw"
## You can use it to test ReplicaSets and Ingress, and see how requests land on different containers.
## Details at cloud-native-everything.com
## Move it to a file with the name Dockerfile and build it using for example "docker build -t gcr.io/k8s-helloworld-142719/hello-node:v1 ."
FROM node:4.4
EXPOSE 8080
COPY server.js .
CMD node server.js
Then, I built and pushed two images to my registry:
- pinrojas/hello-svca:v1
- pinrojas/hello-svcb:v1
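For reference, a minimal sketch of what each deployment and its Service could look like (the label names and the initial NodePort type are assumptions; the replica count and image match what you’ll see below, and the actual manifest lives in the repo):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-svca
spec:
  replicas: 4                     # matches the four pods listed below
  selector:
    matchLabels:
      app: hello-svca             # label name is an assumption
  template:
    metadata:
      labels:
        app: hello-svca
    spec:
      containers:
      - name: hello-svca
        image: pinrojas/hello-svca:v1
        ports:
        - containerPort: 8080     # server.js listens on 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-svca
spec:
  type: NodePort                  # optionally patched to LoadBalancer later
  selector:
    app: hello-svca
  ports:
  - port: 8080
    targetPort: 8080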
I applied the manifest from my repo to create the deployments and services, and got the following output:
kubectl get pods | grep svc
hello-svca-5dddf9b9bb-8sh5b   1/1   Running   0   7d2h
hello-svca-5dddf9b9bb-cjmpr   1/1   Running   0   7d2h
hello-svca-5dddf9b9bb-mkjlf   1/1   Running   0   7d2h
hello-svca-5dddf9b9bb-v7rkj   1/1   Running   0   7d2h
hello-svcb-5cb5649f58-2cmvw   1/1   Running   0   7d2h
hello-svcb-5cb5649f58-f6p45   1/1   Running   0   7d2h
hello-svcb-5cb5649f58-fjdp9   1/1   Running   0   7d2h
hello-svcb-5cb5649f58-qj6dk   1/1   Running   0   7d2h
Optionally, you can patch the services to use LoadBalancer instead of NodePort.
kubectl patch service hello-svcb -p '{"spec":{"type": "LoadBalancer", "externalTrafficPolicy":"Local"}}'
kubectl patch service hello-svca -p '{"spec":{"type": "LoadBalancer", "externalTrafficPolicy":"Local"}}'
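The piece that actually splits the traffic per host is the Ingress resource itself. It isn’t reproduced in this post, but a minimal sketch of the rules it needs would look like this (the resource name, pathType and service port are assumptions; the host names match the /etc/hosts entries used below):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-ingress             # name is an assumption
spec:
  ingressClassName: nginx
  rules:
  - host: myservicea.foo.org
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-svca
            port:
              number: 8080        # assumption: matches the hello service port
  - host: myserviceb.foo.org
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-svcb
            port:
              number: 8080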
Final Results
Well, finally, to test this setup of Ingress and MetalLB, I’ll use curl from a server connected to my border leaf (it must be an external host) and hit my ingress via the LoadBalancer IP. Remember, my LoadBalancer service has been deployed using MetalLB with BGP.
I will add the LoadBalancer service IP to my /etc/hosts as follows (use the same IP for both names; remember the Ingress must split the traffic and forward it to the correct deployment based on the host rule):
# cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.20.20.3 client-1
10.254.254.243 myservicea.foo.org
10.254.254.243 myserviceb.foo.org
And then, I tested as follows:
bash-5.1# for i in {1..20}; do curl http://myservicea.foo.org; done
Hello Service A! Host/Pod: hello-svca-5dddf9b9bb-8sh5b
Hello Service A! Host/Pod: hello-svca-5dddf9b9bb-mkjlf
Hello Service A! Host/Pod: hello-svca-5dddf9b9bb-8sh5b
Hello Service A! Host/Pod: hello-svca-5dddf9b9bb-v7rkj
Hello Service A! Host/Pod: hello-svca-5dddf9b9bb-8sh5b
Hello Service A! Host/Pod: hello-svca-5dddf9b9bb-cjmpr
Hello Service A! Host/Pod: hello-svca-5dddf9b9bb-v7rkj
Hello Service A! Host/Pod: hello-svca-5dddf9b9bb-cjmpr
Hello Service A! Host/Pod: hello-svca-5dddf9b9bb-v7rkj
Hello Service A! Host/Pod: hello-svca-5dddf9b9bb-mkjlf
Hello Service A! Host/Pod: hello-svca-5dddf9b9bb-v7rkj
Hello Service A! Host/Pod: hello-svca-5dddf9b9bb-cjmpr
Hello Service A! Host/Pod: hello-svca-5dddf9b9bb-mkjlf
Hello Service A! Host/Pod: hello-svca-5dddf9b9bb-8sh5b
Hello Service A! Host/Pod: hello-svca-5dddf9b9bb-v7rkj
Hello Service A! Host/Pod: hello-svca-5dddf9b9bb-cjmpr
Hello Service A! Host/Pod: hello-svca-5dddf9b9bb-v7rkj
Hello Service A! Host/Pod: hello-svca-5dddf9b9bb-mkjlf
Hello Service A! Host/Pod: hello-svca-5dddf9b9bb-mkjlf
Hello Service A! Host/Pod: hello-svca-5dddf9b9bb-v7rkj
bash-5.1# for i in {1..20}; do curl http://myserviceb.foo.org; done
Hello Service B! Host/Pod: hello-svcb-5cb5649f58-2cmvw
Hello Service B! Host/Pod: hello-svcb-5cb5649f58-fjdp9
Hello Service B! Host/Pod: hello-svcb-5cb5649f58-f6p45
Hello Service B! Host/Pod: hello-svcb-5cb5649f58-f6p45
Hello Service B! Host/Pod: hello-svcb-5cb5649f58-qj6dk
Hello Service B! Host/Pod: hello-svcb-5cb5649f58-qj6dk
Hello Service B! Host/Pod: hello-svcb-5cb5649f58-2cmvw
Hello Service B! Host/Pod: hello-svcb-5cb5649f58-f6p45
Hello Service B! Host/Pod: hello-svcb-5cb5649f58-qj6dk
Hello Service B! Host/Pod: hello-svcb-5cb5649f58-f6p45
Hello Service B! Host/Pod: hello-svcb-5cb5649f58-f6p45
Hello Service B! Host/Pod: hello-svcb-5cb5649f58-2cmvw
Hello Service B! Host/Pod: hello-svcb-5cb5649f58-2cmvw
Hello Service B! Host/Pod: hello-svcb-5cb5649f58-f6p45
Hello Service B! Host/Pod: hello-svcb-5cb5649f58-2cmvw
Hello Service B! Host/Pod: hello-svcb-5cb5649f58-qj6dk
Hello Service B! Host/Pod: hello-svcb-5cb5649f58-2cmvw
Hello Service B! Host/Pod: hello-svcb-5cb5649f58-qj6dk
Hello Service B! Host/Pod: hello-svcb-5cb5649f58-2cmvw
Hello Service B! Host/Pod: hello-svcb-5cb5649f58-fjdp9
Please don’t forget to comment. See ya!