Kubernetes metallb bare metal loadbalancer

(Last Updated On: September 22, 2018)

Bare metal Kubernetes deployments are no longer second-class deployments. Now you too can use LoadBalancer services in Kubernetes with MetalLB.

Kubernetes is very flexible in how you can deploy it. You can deploy it to cloud environments like Google Cloud, Microsoft Azure, and Amazon AWS. You can deploy it to on-premises clouds like OpenStack. Lastly, you can deploy Kubernetes on bare metal using several popular operating systems like Ubuntu Linux, CentOS, or Red Hat Enterprise Linux.

Kubernetes Pod Connectivity

Deploying your containerized application to Kubernetes creates what Kubernetes calls a Pod. Pods are the smallest deployable unit in Kubernetes. Each pod is assigned a single IP address that is internal to the Kubernetes cluster, and other pods can communicate with your pod using that IP address. The IP address is only good until the pod dies.

Pods have a life cycle, and they are typically referred to as cattle rather than pets. Pods are created, perform their function, and then die. If there is a problem with a pod, given this life cycle, we simply destroy it and recreate it. We don't care much about any individual pod, hence the cattle reference.

The problem is the IP address that gets assigned to the pod. Every time a pod is recreated it receives a new IP address, so any other pods that depend on it would break each time that happens. To remedy this, Kubernetes has Services. A Service gets its own IP address and acts like a load balancer in front of the pods that match its selector. If a pod behind the Service is recreated, the Service learns the pod's new IP and keeps sending traffic to it, while the Service IP itself stays the same. A minimal Service manifest is sketched below.
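For illustration, here is a minimal Service manifest; the name and the run=nginx label are just examples that happen to match the demo later in this post:

apiVersion: v1
kind: Service
metadata:
  name: nginx-service        # example name
spec:
  selector:
    run: nginx               # traffic is load balanced across pods carrying this label
  ports:
  - port: 80                 # cluster-internal Service port
    targetPort: 80           # container port on the matching pods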

Exposing Kubernetes Services

So we have solved the pod connectivity problem, but what if we want to expose our services to the outside world? Kubernetes services are assigned internal cluster IPs, which are not reachable from outside the cluster. Kubernetes provides several ways to expose services: NodePort, host networking, Ingress, and LoadBalancer. I won't cover ClusterIP here because it only gives you a cluster-local IP that isn't accessible outside the cluster. The official Kubernetes documentation covers these in great detail, so I will just summarize them here.

NodePort

This lets you expose a service using the IP address of the Kubernetes node that your application is deployed to, but on a random (or static, if you configure it as such) TCP port between 30000 and 32767. As an example, let's say you had a web application deployed that accepts traffic on port 80. If you configure a service to expose this application using NodePort, Kubernetes might assign it a random port such as 30176, and you would access your application by browsing to the node's IP on port 30176. Every time you redeploy the service this port may change. A sketch of such a service is shown below.
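As a sketch (the names and labels are hypothetical, and the nodePort line is optional), a NodePort service for that web application could look like this:

apiVersion: v1
kind: Service
metadata:
  name: webapp-nodeport      # hypothetical name
spec:
  type: NodePort
  selector:
    app: webapp              # matches the example web application's pods
  ports:
  - port: 80                 # cluster-internal service port
    targetPort: 80           # container port
    nodePort: 30176          # pin the node port instead of letting Kubernetes pick one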

HostNetworking

Host networking lets a pod bind a static port directly on the Kubernetes node it is deployed to. So if you deployed the example web application with host networking, it could claim port 80 on the node, and all traffic arriving at the node's IP on port 80 would go to your web application. This is all well and good until you have multiple web applications on the same node that all need port 80, because only one of them can have it. A sketch is shown below.
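As a rough sketch (the pod name is hypothetical and the image is just an example), host networking is enabled on the pod spec itself with hostNetwork: true, which makes the container listen directly on the node's network interfaces:

apiVersion: v1
kind: Pod
metadata:
  name: webapp-hostnet       # hypothetical name
spec:
  hostNetwork: true          # share the node's network namespace
  containers:
  - name: webapp
    image: nginx             # example image that listens on port 80
    ports:
    - containerPort: 80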

Ingress

Until recently, Ingress was the best option if you deployed a Kubernetes cluster on bare metal. Ingress lets you route HTTP or HTTPS traffic to your deployed services using software load balancers like NGINX or HAProxy, deployed as pods in your cluster, and it gives you Layer 7 routing for your applications as well (a minimal example is sketched below). The problem is that it doesn't easily handle plain TCP or UDP traffic. The best way to do that was with a LoadBalancer type of service.
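For illustration, here is a minimal Ingress resource that routes HTTP traffic for a placeholder hostname to the nginx-service used later in this post; it uses the extensions/v1beta1 API that was current when this was written:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: webapp-ingress       # hypothetical name
spec:
  rules:
  - host: example.com        # placeholder hostname
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-service   # existing Service to route traffic to
          servicePort: 80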

However, if you deployed your Kubernetes cluster on bare metal, you didn't have the option of using a LoadBalancer. That was only available in cloud deployments, making bare metal deployments second-class deployments.

Until now.

LoadBalancer

There is a new project for Kubernetes called MetalLB that changes all that. It is still in alpha, but if you want the benefits of load balancing in your bare metal Kubernetes deployment, I recommend you give it a try.

Kubernetes MetalLB

Getting Kubernetes MetalLB installed and configured is relatively easy for a basic deployment. All it really requires is a pool of IP addresses that it can assign to your load-balanced services.

Installing Kubernetes MetalLB

Installation is a snap. Just run the following command to install MetalLB:

$ kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.6.2/manifests/metallb.yaml

After it completes, you just need to configure it.
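Before moving on, you can check that the MetalLB pods are running; with this version you should see a controller pod and one speaker pod per node in the metallb-system namespace (the exact pod names will differ on your cluster):

$ kubectl get pods -n metallb-system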

Configuring MetalLB

We need to tell MetalLB which IPs it can use. Create a new file called metallb.yml and add the following contents:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250

Here we are telling it to use the pool of 11 IPs from 192.168.1.240 through 192.168.1.250. If your network uses DHCP, make sure that whatever pool you give to MetalLB is excluded from the DHCP pool of available IPs. Otherwise you might end up with IP address conflicts, and stuff will break.

Finally, apply the configuration to your Kubernetes cluster.

$ kubectl create -f metallb.yml

That is all there is to it. You now have load balancing enabled on your Kubernetes cluster! Let's go ahead and test everything out.

Testing bare metal load balancing

To test everything out we will need to deploy an example application, so let's deploy one like we normally would:

$ kubectl run nginx --image=nginx --port=80
$ kubectl expose deployment nginx --type=LoadBalancer --name=nginx-service

Our test application is now deployed. Let's get more information about our newly created service.

$ kubectl describe service nginx-service
Name:                     nginx-service
Namespace:                default
Labels:                   run=nginx
Annotations:              <none>
Selector:                 run=nginx
Type:                     LoadBalancer
IP:                       10.98.121.148
LoadBalancer Ingress:     192.168.1.241
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  32542/TCP
Endpoints:                10.44.0.6:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason       Age   From                Message
  ----    ------       ----  ----                -------
  Normal  IPAllocated  11s   metallb-controller  Assigned IP "192.168.1.241"

As you can see, MetalLB assigned the IP 192.168.1.241 to our service (shown as LoadBalancer Ingress above). Test it out by browsing to that address, or with curl as shown below.
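For example, from a machine on the same network (substitute whatever IP MetalLB assigned in your cluster), you should get the default NGINX welcome page back:

$ curl http://192.168.1.241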

And since services keep the IPs they are assigned, you can configure your internal DNS servers to resolve a hostname to that IP.
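For example, in a BIND-style zone file (the hostname and zone here are hypothetical), a single A record pointing at the assigned load balancer IP is enough:

nginx.apps.example.com.    IN    A    192.168.1.241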

Need Kubernetes Training?

Check out these awesome Kubernetes courses from Pluralsight. Pluralsight gives you outstanding training at a great price. I use Pluralsight every time I need to learn something new.

Conclusion

Thanks for reading this article.  If you liked it then please comment below.  Click here if you would like more great articles about Kubernetes from AdminTome Blog.

If you enjoyed this post, please consider donating to AdminTome Blog below. Thanks again for reading!
