Easily Install Kubernetes on Ubuntu Step by Step

(Last Updated On: September 22, 2018)

This post will give you easy, step-by-step instructions on how to install Kubernetes on Ubuntu 18.04 including Ingress and the Dashboard.

It’s no secret that Kubernetes is THE container orchestration platform these days. If you have been wanting to give it a try but installing Kubernetes seems like a daunting task, then look no further. In this step-by-step guide, I will walk you through installing Kubernetes on Ubuntu Server 18.04, including the Kubernetes Dashboard and an Ingress Controller to expose your containerized applications to the rest of your network or the outside world. I have tried to make this an all-inclusive guide, meaning that you will end up with a fully usable Kubernetes cluster that behaves very similarly to what you would expect in production.

Getting Everything Ready

For this guide we will keep it simple and install a single-master Kubernetes Cluster: one Kubernetes Master and one Kubernetes Node. I created two virtual machines:

Name        vCPUs   Mem (MB)   Role
kubemaster  4       4096       Kubernetes Master
kubeslave   4       8192       Kubernetes Node

Make sure you give the Kubernetes Node enough disk space for your applications to utilize.  Also make sure that you have static IPs assigned to all the servers and if you have internal DNS servers add the IPs and hostnames to your DNS configuration.

Install Ubuntu Server 18.04 on all your servers.  Update everything and reboot.

# apt update && apt upgrade -y
# reboot

This will ensure that if the kernel was updated then we will make use of it.
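To confirm the reboot actually put you on the newly installed kernel, you can compare the running kernel against the newest image on disk. This is a quick sketch; the /boot path layout assumed here is the standard Ubuntu one:

```shell
# Kernel version currently running
uname -r
# Newest installed kernel image (standard Ubuntu /boot layout assumed)
ls /boot/vmlinuz-* 2>/dev/null | sort -V | tail -1
```

If the two versions disagree, the reboot has not taken effect yet.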

Next we need to disable swap on all your Kubernetes servers:

# swapoff -a

Also edit your /etc/fstab file and comment out the swap line:

#/swap.img none swap sw 0 0
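If you prefer to script the fstab change, sed can comment out the swap entry for you. The sketch below runs against a throwaway copy so you can see the effect before touching the real /etc/fstab (back it up first if you do):

```shell
# Demo on a copy; point sed at /etc/fstab to do it for real
printf '/swap.img none swap sw 0 0\n' > /tmp/fstab.demo
# Comment out any line containing the word "swap"
sed -i '/\bswap\b/ s/^/#/' /tmp/fstab.demo
cat /tmp/fstab.demo
```

Remember that swapoff -a is still needed for the running system; the fstab change only keeps swap from coming back after a reboot.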

Configure DNS (Optional)

If you have internal DNS servers and want to be able to access your applications using the Layer 7 routing functionality of the Kubernetes NGINX Ingress Controller then you can configure your internal DNS servers to route traffic to your Kubernetes slaves.

For example, if you deployed a web page to your Kubernetes Cluster and you want to access it with a standard URL:


Then you would follow these steps. You most likely already have your DNS servers serving name resolution on your internal root domain (i.e., admintome.lab in this example). You can configure your DNS servers to resolve names for a subdomain (i.e., kube.admintome.lab) and forward all requests for it to your Kubernetes Slave's IP address. This means that if you browse to anything matching *.kube.admintome.lab you will always get the same IP address: the IP address of your Kubernetes Slave.

These instructions are for BIND9 DNS Servers (what I use internally). The first thing we need to do is edit the /etc/bind/named.conf.local file as root and add a new zone for our subdomain:

zone "kube.admintome.lab" {
        type master;
        file "/etc/bind/db.kube.admintome.lab";
        allow-transfer {; };
};

Save and Exit the file.  Now create a new file called /etc/bind/db.kube.admintome.lab and add the following contents:

; BIND data file for kube.admintome.lab
$TTL    604800
@               IN      SOA     kube.admintome.lab. root.kube.admintome.lab. (
                     2018062901         ; Serial
                         604800         ; Refresh
                          86400         ; Retry
                        2419200         ; Expire
                         604800 )       ; Negative Cache TTL
                IN      A
@               IN      NS      ns1.admintome.lab.
@               IN      A
@               IN      AAAA    ::1
*               IN      A

Notice that the last line has the wildcard ‘*’ to capture all lookups and resolve them to the IP of my Kubernetes Slave. Be sure to adjust these files for your specific domain. Save the file and exit, then restart BIND to make the changes take effect:

# systemctl restart bind9

To test it out, you can ping anything.kube.admintome.lab and it should ping your Kubernetes Slave.

dev:~/Development/kube$ ping dontexist.kube.admintome.lab
PING dontexist.kube.admintome.lab ( 56(84) bytes of data.
64 bytes from kubeslave.admintome.lab ( icmp_seq=1 ttl=64 time=0.343 ms
64 bytes from kubeslave.admintome.lab ( icmp_seq=2 ttl=64 time=0.473 ms
64 bytes from kubeslave.admintome.lab ( icmp_seq=3 ttl=64 time=0.563 ms
--- dontexist.kube.admintome.lab ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2036ms
rtt min/avg/max/mdev = 0.343/0.459/0.563/0.093 ms
dev:~/Development/kube$

We are all setup to utilize the Kubernetes Ingress later with Layer 7 routing!  Now let’s install Kubernetes in the next section.

Install Kubernetes

Kubernetes runs its components in containers, so we need to install Docker on all of our Kubernetes servers.

# apt install docker.io

Install kubeadm, kubelet and kubectl

We will use kubeadm to bootstrap our cluster and keep things easy for us. The kubelet daemon does all the work of running the Kubernetes containers, and the kubectl program is what we will use to interact with our cluster. Run these commands on both your servers.

# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
# cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
# apt update
# apt install -y kubelet kubeadm kubectl

I know it says kubernetes-xenial there, but it still works on 18.04.

Bootstrap your Kubernetes Master

Now let’s get our Kubernetes Master running.

# kubeadm init

This will take a few minutes to complete. The last several lines of the output tell you what to do next:

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>

Make sure you copy this and save it somewhere like a text file. We will need this line later on when we configure our Kubernetes Slave. (If you do lose it, you can print a fresh join command on the master with kubeadm token create --print-join-command.)

The very first thing you have to do after you initialize your Kubernetes Master is to configure the networking addon.  In this guide, we will configure Weave to handle our pod networking for us because it seems to be the most widely used these days.

Configure Weave Net

Run the following command to allow bridging:

# sysctl net.bridge.bridge-nf-call-iptables=1

Then run the following command to install the Weave Net Addon:

# kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

Now our pods will have connectivity to each other.

Bootstrap your Kubernetes Slave

Now that we have our Kubernetes Master ready, the next task is to bootstrap the Kubernetes Slave. Remember that command we saved earlier that ‘kubeadm init’ gave us? Log on to your Kubernetes Slave and run the command as root:

# kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>

It should eventually come back and tell you “Node Join Complete“.  You now have a basic Kubernetes Cluster up and running. The next section will cover accessing our cluster from our development system so we can manage it remotely.

Managing your Cluster

I do everything from my development system.  This means that I will need to connect to my Kubernetes Cluster from my development system so I can do stuff like deploy applications, services, and more without having to login to my Kubernetes Master.

On your development system install kubectl much like we did before on our servers:

# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
# cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
# apt update
# apt install -y kubectl

Now we need to tell it how to connect to our remote cluster.  Log back into your Kubernetes Master and cat the ~/.kube/config file:

# cat ~/.kube/config

Copy the contents to your clipboard. Create the same file on your local development system (including the .kube directory in your home directory) and paste in the contents.

dev:~$ mkdir .kube
dev:~$ vim .kube/config

After pasting the contents of the file, save and exit.  Now you can run kubectl commands on your development system to manage your new Kubernetes Cluster!

$ kubectl cluster-info

As a side note, I like to create a directory where I keep all my Kubernetes Yaml files for configuring everything on my Kubernetes Cluster.  You can do the same using this command:

$ mkdir -p ~/Development/kube

I will be using this directory throughout the rest of this guide.

In the next section we will add the Kubernetes Dashboard Addon and configure it.

Kubernetes Dashboard

The Kubernetes Dashboard is a nice web-based GUI interface for our Kubernetes Cluster.  It’s pretty easy to install, just run this command:

dev:~$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

Installing it is simple, but actually accessing it takes a little effort, though nothing too bad. We need to be able to access the dashboard from outside our Kubernetes Cluster. To do this we create a proxy that will allow outside sources to access our dashboard. In production this would be a bad idea, but we can get away with it here because our Kubernetes Cluster isn’t exposed to the entire world. Run the following command on your Kubernetes Master:

kubemaster:~# kubectl proxy --accept-hosts='^.*$' --address='{your_server_ip}' &

Make sure to substitute the actual IP address of your Kubernetes Master. This will run in the background and allow you to access the dashboard.

The next thing we have to do is adjust the security to allow us to access the dashboard.  Again, this isn’t a good idea in production but in this example we should be OK.

Create a new file ~/Development/kube/dashboard-admin.yml with these contents:

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

Save and Exit the file.  Now run the following command to create the Role Binding resource.

dev:~/Development/kube$ kubectl create -f dashboard-admin.yml

You should now be able to access your Kubernetes Dashboard by browsing to this URL

http://{your kubernetes master hostname}:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

You should see the Kubernetes Dashboard login.

Kubernetes Dashboard Login

Click on Skip and you are all logged in:

Install Kubernetes Dashboard

The last thing we need to do to finish our Kubernetes install is to set up an Ingress Controller.

Kubernetes Ingress

What is ingress? Normally in Kubernetes we have a couple of ways to expose our services outside the cluster: NodePort and LoadBalancer. NodePort picks a port from the 30000–32767 range and maps it to the exposed port of your container. So if you deployed a web page on an NGINX container, then NodePort would let you access that page with a URL like

http://{your kube slave ip}:31717

Kubernetes routes traffic from its slave IP on port 31717 to your NGINX container’s port 80. The 31717 port here is just an example; the ports are picked at random from that range.

The LoadBalancer type only works if your Kubernetes Cluster is deployed on a cloud like Amazon AWS, Google Cloud Platform, Microsoft Azure, or OpenStack. We are installing on bare metal, so we can’t make use of LoadBalancer.

What we need is a way to expose our web page on port 80 and route traffic to our container using a containerized load balancer like NGINX or HAProxy. That is what Ingress gives us.

I searched for several days on how to configure this correctly and here are the steps you can take to get Ingress installed and configured.

Install an Ingress Controller

The first thing we need to do is configure an ingress controller.  The config file is kind of long so I decided to post it to my GitHub repo.  You can install the ingress controller by running this command:

kubectl create -f https://raw.githubusercontent.com/admintome/ingress-config/master/ingress-controller.yml

This will create a few resources including the ingress controller.

You are now ready to create ingress resources for your applications.  The last section will deploy a sample application and demonstrate the process end-to-end.

Deploying an example application

After you install Kubernetes, you will need to test everything out by deploying a sample application. We will deploy a simple NGINX container as a sample web application.  Run the following commands to get the basic application up.

$ kubectl run nginx --image=nginx --port=80
$ kubectl expose deployment nginx --type=NodePort --name=nginx-service
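For reference, those two imperative commands are roughly equivalent to the declarative manifest below. This is a sketch: the API versions match the Kubernetes era of this guide, and the names and labels mirror what `kubectl run` and `kubectl expose` generate.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    run: nginx
  ports:
  - port: 80
    targetPort: 80
```

Applying a file like this with kubectl apply -f produces the same pod and service, with the advantage that the configuration can live in version control.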

This will automatically create a pod and a service for our NGINX application.  Now we need to know how to access it:

$ kubectl describe service nginx-service
Name: nginx-service
Namespace: default
Labels: run=nginx
Annotations: <none>
Selector: run=nginx
Type: NodePort
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 30022/TCP
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>

As you can see, we have the application exposed on port 30022.  If you browse to the IP of your Kubernetes Slave and that port number you should see the NGINX Getting Started page.

http://{your kube slave ip}:30022

Again, that is a random port number and yours will almost certainly be different.
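Rather than eyeballing the describe output each time, you can extract the port programmatically. The sketch below runs awk over the sample output shown above; against a live cluster you would pipe in `kubectl describe service nginx-service` instead:

```shell
# Sample of the `kubectl describe service` output from above
cat <<'EOF' > /tmp/nginx-svc.txt
Type: NodePort
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 30022/TCP
EOF
# Grab the third field of the NodePort line and strip the /TCP suffix
awk '/^NodePort:/ { sub(/\/TCP/, "", $3); print $3 }' /tmp/nginx-svc.txt
```

On a live cluster, `kubectl get service nginx-service -o jsonpath='{.spec.ports[0].nodePort}'` gets you the same number without any text parsing.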

So we are able to access our sample application, but it’s not ideal. I don’t want to have to look up the port every time I redeploy my application. I would much rather set a static URL that maps to that random port automatically. That is why we configured our Ingress Controller earlier.

Creating a simple ingress resource

Let’s continue our Kubernetes install by creating a simple catch-all ingress resource for our application. Create a new file called ingress-simple.yml with the following contents:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:
    serviceName: nginx-service
    servicePort: 80

Create the resource with the following command:

$ kubectl create -f ingress-simple.yml

After it is complete, you can browse to the IP or hostname of your Kubernetes slave and get the NGINX Getting Started page:


This is great if we only have one web application that we want to expose on HTTP port 80 but most likely that is not the case.  We need something a little more robust.  This is where the layer 7 routing comes in.

Layer 7 Routing Ingress

In order to complete this section, you will need to have configured your DNS to map to your kubernetes cluster like in the Configure DNS (optional) section of this Install Kubernetes guide.  This sends all DNS lookups matching *.kube.admintome.lab (in my environment) to the IP address of my Kubernetes Slave.

First we need to get rid of the simple ingress we created in the last section.

$ kubectl delete -f ingress-simple.yml

Next create a new file called named-ingress.yml with the following contents:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mynginx-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: nginx.kube.admintome.lab
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-service
          servicePort: 80

This maps the hostname nginx.kube.admintome.lab to our nginx-service. So if you browsed to that URL, you would again see our NGINX Getting Started page.


This is the best way to reach our applications that are deployed to our Kubernetes Cluster.
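A handy way to test name-based routing like this before (or instead of) setting up wildcard DNS is to supply the Host header yourself with curl, since that header is what the Ingress rule matches on. The sketch below demonstrates the idea against a throwaway local Python web server; against the real cluster you would point curl at your Kubernetes Slave’s IP on port 80 instead of 127.0.0.1:8099:

```shell
# Stand-in for the cluster: a local web server with hypothetical content
mkdir -p /tmp/hostdemo
echo "hello from nginx-service" > /tmp/hostdemo/index.html
( cd /tmp/hostdemo && exec python3 -m http.server 8099 ) >/dev/null 2>&1 &
SRV=$!
sleep 1
# The Host header is what name-based (Layer 7) routing keys on
RESP=$(curl -s -H "Host: nginx.kube.admintome.lab" http://127.0.0.1:8099/)
echo "$RESP"
kill $SRV 2>/dev/null
```

The same trick works for any virtual-host setup, not just Kubernetes Ingress.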

Multiple Application Ingress

To add other applications you simply add more path entries to the ingress, each specifying its own serviceName and port.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: nginx.kube.admintome.lab
    http:
      paths:
      - path: /app1
        backend:
          serviceName: nginx-service
          servicePort: 80
      - path: /app2
        backend:
          serviceName: nginx-service2
          servicePort: 80

Notice that the path changes for each app?  To access the first application you would use this URL:


and to access the second application:


This is provided you deployed the second application with a service name of nginx-service2.

Need Kubernetes Training?

Check out these awesome Kubernetes courses from Pluralsight. Pluralsight gives you outstanding training at a great price. I use Pluralsight every time I need to learn something new.


You have now learned how to install Kubernetes on your bare metal Ubuntu 18.04 servers including the Kubernetes Dashboard and functioning Ingress to expose your deployments.

I hope that this article on how to install Kubernetes was a huge help to you in getting Kubernetes deployed. If you found it helpful, please leave a comment below. Also be sure to sign up for my newsletter, where you will get weekly updates on my latest articles and special content only for subscribers.

If you like to see more of my articles on Kubernetes click here.

If you enjoyed this post, please consider donating to AdminTome Blog below. Thanks again for reading!
