CONFIGURE KUBERNETES ON COREOS | PART 2

(Last Updated On: October 3, 2018)

In the last post I covered how to configure an ETCD2 server and a Kubernetes master server. This post wraps things up by configuring the Kubernetes worker server and deploying a couple of applications on our new cluster.

If you haven't read the first post yet, go through it first, because all of the steps in this post depend on the steps in that post being completed.

Cloud-Config Files (cont)

Kubernetes Worker Server Cloud-Config

We begin with the cloud-config file for our Kubernetes worker server.

~/kubernetes-cluster/kub2/user-data

#cloud-config

ssh_authorized_keys:
        - "ssh-rsa AAAAB3Nz..."
hostname: "kub2"

coreos:
 flannel:
  etcd_endpoints: http://192.168.1.11:2379
  interface: 192.168.1.13
 units:
  - name: kubelet.service
    content: |
      [Service]
      ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests

      Environment=KUBELET_VERSION=v1.4.0-beta.5_coreos.0
      ExecStart=/usr/lib/coreos/kubelet-wrapper \
        --api-servers=https://192.168.1.12 \
        --network-plugin-dir=/etc/kubernetes/cni/net.d \
        --network-plugin= \
        --register-node=true \
        --allow-privileged=true \
        --config=/etc/kubernetes/manifests \
        --hostname-override=192.168.1.13 \
        --cluster-dns=10.3.0.10 \
        --cluster-domain=cluster.local \
        --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml \
        --tls-cert-file=/etc/kubernetes/ssl/worker.pem \
        --tls-private-key-file=/etc/kubernetes/ssl/worker-key.pem
      Restart=always
      RestartSec=10
      [Install]
      WantedBy=multi-user.target
  - name: systemd-networkd.service
    command: stop
  - name: 00-eth0.network
    runtime: true
    content: |
      [Match]
      Name=eth0

      [Network]
      DNS=8.8.8.8
      Address=192.168.1.13
      Gateway=192.168.1.1
  - name: down-interfaces.service
    command: start
    content: |
      [Service]
      Type=oneshot
      ExecStart=/usr/bin/ip link set eth0 down
      ExecStart=/usr/bin/ip addr flush dev eth0
  - name: systemd-networkd.service
    command: restart

write_files:
  - path: "/etc/kubernetes/manifests/kube-proxy.yaml"
    content: |
      apiVersion: v1
      kind: Pod
      metadata:
        name: kube-proxy
        namespace: kube-system
      spec:
        hostNetwork: true
        containers:
        - name: kube-proxy
          image: quay.io/coreos/hyperkube:v1.3.4_coreos.0
          command:
          - /hyperkube
          - proxy
          - --master=https://192.168.1.12
          - --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml
          - --proxy-mode=iptables
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: /etc/ssl/certs
              name: "ssl-certs"
            - mountPath: /etc/kubernetes/worker-kubeconfig.yaml
              name: "kubeconfig"
              readOnly: true
            - mountPath: /etc/kubernetes/ssl
              name: "etc-kube-ssl"
              readOnly: true
        volumes:
          - name: "ssl-certs"
            hostPath:
              path: "/usr/share/ca-certificates"
          - name: "kubeconfig"
            hostPath:
              path: "/etc/kubernetes/worker-kubeconfig.yaml"
          - name: "etc-kube-ssl"
            hostPath:
              path: "/etc/kubernetes/ssl"
  - path: "/etc/kubernetes/worker-kubeconfig.yaml"
    content: |
      apiVersion: v1
      kind: Config
      clusters:
      - name: local
        cluster:
          certificate-authority: /etc/kubernetes/ssl/ca.pem
      users:
      - name: kubelet
        user:
          client-certificate: /etc/kubernetes/ssl/worker.pem
          client-key: /etc/kubernetes/ssl/worker-key.pem
      contexts:
      - context:
          cluster: local
          user: kubelet
        name: kubelet-context
      current-context: kubelet-context

This cloud-config is pretty similar to our master server's. We configure flanneld to talk to our etcd server, set up the kubelet systemd service, and write out two files: the kube-proxy kubelet manifest and the worker-kubeconfig.yaml file that the kubelet and kube-proxy use to authenticate to the API server.
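
Before booting the worker, it can save a reboot cycle to catch YAML or syntax mistakes in the cloud-config. As a quick sanity check (assuming the coreos-cloudinit binary is available to you; it ships on every CoreOS host, so the etcd or master node works), its -validate flag parses the file without applying anything. The path below is the one used in this series.

$ coreos-cloudinit -validate -from-file ~/kubernetes-cluster/kub2/user-data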

Start your server with the listed cloud-config file, then log in. We now need to copy the TLS keys again, much like we did for the master server.

Copy the TLS keys

SCP the keys.tar.gz file to the server.

$ scp -i <your ssh private key> keys.tar.gz core@192.168.1.13:/home/core

Log in and move the keys to the root user's home directory.

$ sudo cp keys.tar.gz /root
$ sudo su -
# tar -xzvf keys.tar.gz
# cd certs

Copy the keys we will need to the right directory. Ensure you replace kub2.billcloud.local with the actual FQDN of your worker server.

# mkdir -p /etc/kubernetes/ssl
# cp ca.pem /etc/kubernetes/ssl
# cp kub2.billcloud.local-worker.pem /etc/kubernetes/ssl
# cp kub2.billcloud.local-worker-key.pem /etc/kubernetes/ssl

Set the proper permissions on the private key, then create the worker.pem and worker-key.pem symlinks that the kubelet configuration expects (WORKER_FQDN should be set to your worker's FQDN, e.g. kub2.billcloud.local, or just type the file names out):

# chmod 600 /etc/kubernetes/ssl/*-key.pem
# chown root:root /etc/kubernetes/ssl/*-key.pem
# cd /etc/kubernetes/ssl/
# ln -s ${WORKER_FQDN}-worker.pem worker.pem
# ln -s ${WORKER_FQDN}-worker-key.pem worker-key.pem
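
If the kubelet later fails TLS authentication, the usual culprits are a certificate/key mismatch or symlinks pointing at the wrong files. A quick sanity check with openssl (standard tooling, nothing specific to this guide, and assuming the RSA keys generated in the first post) is to compare the modulus hashes of the certificate and the key; the two values must match:

# openssl x509 -noout -modulus -in /etc/kubernetes/ssl/worker.pem | openssl md5
# openssl rsa -noout -modulus -in /etc/kubernetes/ssl/worker-key.pem | openssl md5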

Load our changed systemd units

# systemctl daemon-reload

Start the services

# systemctl start flanneld
# systemctl start kubelet

Enable the services to start upon reboot:

# systemctl enable flanneld
# systemctl enable kubelet

Verify that kubelet is running

# systemctl status kubelet
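
If the status output shows the unit failing or restarting, the kubelet-wrapper writes its output to the journal, which is the quickest place to spot bad certificate paths or API server connection errors. For example (standard systemd tooling):

# journalctl -u kubelet --no-pager -n 50
# journalctl -u flanneld --no-pager -n 50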

Configure kubectl

Log out of the server. The following commands will need to be run on your local development system.

Download the kubectl binary.

$ curl -O https://storage.googleapis.com/kubernetes-release/release/v1.3.4/bin/linux/amd64/kubectl
$ chmod +x kubectl
$ mv kubectl /usr/local/bin/kubectl
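
As a quick check that the binary is the release you expect and is on your PATH, print the client version (the --client flag keeps kubectl from trying to contact an API server we have not configured yet):

$ kubectl version --client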

Configuring kubectl

We now need to configure kubectl so that we can manage our kubernetes cluster. Make sure to run these commands from the same directory as the TLS certs that you generated in the last post.

$ kubectl config set-cluster default-cluster --server=https://192.168.1.12 --certificate-authority=ca.pem
$ kubectl config set-credentials default-admin --certificate-authority=ca.pem --client-key=admin-key.pem --client-certificate=admin.pem
$ kubectl config set-context default-system --cluster=default-cluster --user=default-admin
$ kubectl config use-context default-system
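
These commands write to ~/.kube/config on your dev system. Before querying the cluster you can confirm that the cluster, user, and context were stored and that the right context is active:

$ kubectl config view
$ kubectl config current-context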

Verify kubectl configuration

$ kubectl get nodes
NAME           STATUS                     AGE
192.168.1.12   Ready,SchedulingDisabled   16h
192.168.1.13   Ready                      15h

Show the system pods:

$ kubectl get pods --namespace=kube-system
NAME                                    READY     STATUS    RESTARTS   AGE
kube-apiserver-192.168.1.12             1/1       Running   1          16h
kube-controller-manager-192.168.1.12    1/1       Running   1          16h
kube-proxy-192.168.1.12                 1/1       Running   1          16h
kube-proxy-192.168.1.13                 1/1       Running   0          15h
kube-scheduler-192.168.1.12             1/1       Running   1          16h

This shows that everything is working as expected. The last bit of configuration we need to perform is adding a couple of add-ons.

Configuring Kubernetes Add-ons

DNS Add-on

Create a new file named dns-addon.yml with the following contents:

apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.3.0.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP

---

apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-dns-v11
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    version: v11
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1
  selector:
    k8s-app: kube-dns
    version: v11
  template:
    metadata:
      labels:
        k8s-app: kube-dns
        version: v11
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: etcd
        image: gcr.io/google_containers/etcd-amd64:2.2.1
        resources:
          limits:
            cpu: 100m
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 50Mi
        command:
        - /usr/local/bin/etcd
        - -data-dir
        - /var/etcd/data
        - -listen-client-urls
        - http://127.0.0.1:2379,http://127.0.0.1:4001
        - -advertise-client-urls
        - http://127.0.0.1:2379,http://127.0.0.1:4001
        - -initial-cluster-token
        - skydns-etcd
        volumeMounts:
        - name: etcd-storage
          mountPath: /var/etcd/data
      - name: kube2sky
        image: gcr.io/google_containers/kube2sky:1.14
        resources:
          limits:
            cpu: 100m
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 50Mi
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        args:
        # command = "/kube2sky"
        - --domain=cluster.local
      - name: skydns
        image: gcr.io/google_containers/skydns:2015-10-13-8c72f8c
        resources:
          limits:
            cpu: 100m
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 50Mi
        args:
        # command = "/skydns"
        - -machines=http://127.0.0.1:4001
        - -addr=0.0.0.0:53
        - -ns-rotate=false
        - -domain=cluster.local.
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
      - name: healthz
        image: gcr.io/google_containers/exechealthz:1.0
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
        args:
        - -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
        - -port=8080
        ports:
        - containerPort: 8080
          protocol: TCP
      volumes:
      - name: etcd-storage
        emptyDir: {}
      dnsPolicy: Default

Create the add-on

$ kubectl create -f dns-addon.yml
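
Before chasing pod problems, it is worth confirming that the kube-dns service actually received the cluster IP the kubelets were told about with --cluster-dns=10.3.0.10; if these disagree, pods get handed a resolver address that nothing answers on. The cluster IP shown by the following command should be 10.3.0.10:

$ kubectl get svc kube-dns --namespace=kube-system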

Verify that the DNS add-on starts. It might take a few minutes to get going, but you should see output like the following when you check the pods:

$ kubectl get pods --namespace=kube-system
NAME                                    READY     STATUS    RESTARTS   AGE
kube-apiserver-192.168.1.12             1/1       Running   0          2d
kube-controller-manager-192.168.1.12    1/1       Running   0          2d
kube-dns-v11-z3zaq                      4/4       Running   13         1d
kube-proxy-192.168.1.12                 1/1       Running   0          1d
kube-proxy-192.168.1.13                 1/1       Running   4          1d
kube-proxy-192.168.1.14                 1/1       Running   2          1d
kube-scheduler-192.168.1.12             1/1       Running   0          2d

$ kubectl logs kube-dns-v11-z3zaq -c skydns --namespace=kube-system
2016/09/25 03:44:38 skydns: falling back to default configuration, could not read from etcd: 100: Key not found (/skydns/config) [1099]
2016/09/25 03:44:38 skydns: ready for queries on cluster.local. for tcp://0.0.0.0:53 [rcache 0]
2016/09/25 03:44:38 skydns: ready for queries on cluster.local. for udp://0.0.0.0:53 [rcache 0]
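
As an end-to-end test you can resolve a service name from inside a pod. The busybox-test pod and file name below are just a throwaway test client of my own, not part of the official add-on; an nslookup against the cluster DNS should return the IP of the kubernetes service. Create a file named busybox-test.yaml with the following contents:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-test
spec:
  containers:
  - name: busybox
    image: busybox
    command:
    - sleep
    - "3600"
  restartPolicy: Never

Create the pod, wait for it to show Running in kubectl get pods, run the lookup, then clean up (some busybox image versions ship a quirky nslookup, so if this misbehaves trust the skydns logs above):

$ kubectl create -f busybox-test.yaml
$ kubectl exec busybox-test -- nslookup kubernetes.default
$ kubectl delete pod busybox-test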

Kubernetes Dashboard

We now have a DNS service for our Kubernetes cluster. Our last step for this article is to add the Kubernetes Dashboard to our cluster. We simply install it using our kubectl command on our dev system just like we did with DNS.

$ kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml

Note that since this guide covers installing everything on bare metal, we don't have a load balancer installed and therefore can't just browse to https://kubernetes-master-ip/ui to get our dashboard. In the next post in this series we will walk through adding this functionality, among other things. For now, kubectl will actually tell us how to access our dashboard:

deployment "kubernetes-dashboard" created
You have exposed your service on an external port on all nodes in your
cluster.  If you want to expose this service to the external internet, you may
need to set up firewall rules for the service port(s) (tcp:32548) to serve traffic.

See http://releases.k8s.io/release-1.2/docs/user-guide/services-firewalls.md for more details.
service "kubernetes-dashboard" created

Notice the part that says “service port(s) (tcp:32548)”? That tells us that we can access our dashboard using our worker IP and that port: http://192.168.1.13:32548.

NOTE: Your port number may be different than shown here. Check the kubectl output (or query the service directly, as shown below) to know for sure.
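
If you no longer have the output from the create step, the assigned NodePort can be read back from the dashboard service itself; the dashboard is then reachable on that port of the worker node:

$ kubectl describe svc kubernetes-dashboard --namespace=kube-system | grep NodePort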

If you browse to this URL you will see our Kubernetes Dashboard.

Success! Our Kubernetes installation is almost complete. We just have a few more things to do, like configuring keepalived-vip (for load balancing), configuring persistent storage, setting up monitoring, etc. That will be covered in the next post in this series.

Conclusion

I hope you have enjoyed this article; if so, please leave a comment below. For more articles, please sign up for the AdminTome Blog below. Also, please feel free to share the article with your friends using the buttons to the left. Thanks again for reading this post.

 
