Kubernetes and Docker
In this post, I’ll be deploying a Kubernetes cluster running a number of nginx containers. Bear with me, though, as I am still learning Kubernetes and Docker.
Machines
The master node controls the worker nodes; deployments will not run on the master node itself. From my experience, each of these machines needs at least 1.5Gi of memory.
k8s-ms | 10.0.0.165
k8s-wk1 | 10.0.0.44
k8s-wk2 | 10.0.0.58
Prereq work
- Disable SELinux on all machines
- Populate the hosts files so the nodes can resolve each other by name
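A sketch of those two steps, assuming CentOS/RHEL 7 and the hostnames/IPs from the table above. The fragment is written to a local file first so it can be reviewed before appending:

```shell
# Build an /etc/hosts fragment for the three nodes, then append it to
# /etc/hosts on every machine, e.g.:  cat hosts-fragment >> /etc/hosts
cat > hosts-fragment <<'EOF'
10.0.0.165 k8s-ms
10.0.0.44  k8s-wk1
10.0.0.58  k8s-wk2
EOF
cat hosts-fragment

# Disabling SELinux (run on every node, as root):
#   setenforce 0
#   sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
```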
Installing the yum repo
Create /etc/yum.repos.d/k8s.repo with the following contents:
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
Installing the master node
yum -y install kubeadm docker
systemctl enable docker
systemctl start docker
systemctl enable kubelet
systemctl start kubelet
kubeadm init --apiserver-advertise-address 10.0.0.165
Create a kube config so kubectl knows where to connect to the master
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
Install a pod network. I’ll be using the weave net plugin; a number of other options are listed at https://kubernetes.io/docs/concepts/cluster-administration/addons/
sysctl net.bridge.bridge-nf-call-iptables=1
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
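That long query string isn’t magic: the k8s-version parameter is just the output of `kubectl version`, base64-encoded with newlines stripped, which weave uses to serve a manifest matching the cluster version. A minimal illustration of the encoding step, using a plain version string rather than the full `kubectl version` output (the helper name is my own):

```shell
# Hypothetical helper showing how the k8s-version parameter is built:
# base64-encode the input and strip the newlines base64 inserts.
encode_k8s_version() {
  printf '%s' "$1" | base64 | tr -d '\n'
}

encode_k8s_version "v1.10.2"   # prints djEuMTAuMg==
```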
Installing the worker nodes
Create the same yum repo file and then install the packages
yum -y install kubeadm docker
systemctl enable docker
systemctl start docker
systemctl enable kubelet
Join the workers to the master node
kubeadm join 10.0.0.165:6443 --token c0yx4v.fw4b6w6mpfolnbf4 --discovery-token-ca-cert-hash sha256:36bc22f4b41f8c6a18f0e5bdd171979b54b22ebfaa91f99e70506984ca4faf48
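The token and CA cert hash above come from the tail end of the `kubeadm init` output on the master. If they’re lost, `kubeadm token create` issues a fresh token, and the hash can be recomputed from the cluster CA certificate (kubeadm’s default path is /etc/kubernetes/pki/ca.crt); it is the sha256 digest of the CA’s DER-encoded public key. A sketch of that computation:

```shell
# Recompute the --discovery-token-ca-cert-hash value from a CA cert:
# extract the public key, convert it to DER, and take its sha256 digest.
# Usage on the master: ca_pubkey_hash /etc/kubernetes/pki/ca.crt
ca_pubkey_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
}
```

Run against /etc/kubernetes/pki/ca.crt on the master, this should reproduce the `sha256:…` value used in the join command.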
Listing nodes from the master
I now see all 3 nodes in the cluster
[root@k8s-ms ~]# kubectl get nodes
NAME      STATUS     ROLES     AGE       VERSION
k8s-ms    Ready      master    15m       v1.10.2
k8s-wk1   NotReady   <none>    16s       v1.10.2
k8s-wk2   Ready      <none>    1m        v1.10.2
Create an Nginx deployment
Deploy image
[root@k8s-ms ~]# docker pull nginx
[root@k8s-ms ~]# kubectl run nginx --image=nginx
deployment.apps "nginx" created
[root@k8s-ms ~]# kubectl get deployment
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx     1         1         1            1            10s
Going through the worker nodes, I found one instance of nginx running on k8s-wk2.
Then create a service and expose port 80 on the master node’s IP:
[root@k8s-ms ~]# kubectl expose deployment nginx --name=nginx-lb --external-ip=10.0.0.165 --port=80 --target-port=80 --type=LoadBalancer
service "nginx-lb" exposed
[root@k8s-ms ~]# kubectl get service
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1       <none>        443/TCP        25m
nginx-lb     LoadBalancer   10.103.218.95   10.0.0.165    80:32450/TCP   7s
Scale out the deployment
[root@k8s-ms ~]# kubectl scale deployment nginx --replicas=4
deployment.extensions "nginx" scaled
[root@k8s-ms ~]# kubectl get deployment
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx     4         4         4            4            2m
[root@k8s-ms ~]# kubectl get pods -o wide
NAME                                READY     STATUS    RESTARTS   AGE       IP          NODE
nginx-deployment-57664d6d66-9vs2h   1/1       Running   1          22m       10.44.0.3   k8s-wk1
nginx-deployment-57664d6d66-bznch   1/1       Running   1          22m       10.44.0.4   k8s-wk1
nginx-deployment-57664d6d66-m6b2h   1/1       Running   0          22m       10.36.0.1   k8s-wk2
nginx-deployment-57664d6d66-xxs56   1/1       Running   0          22m       10.36.0.2   k8s-wk2
See the above in action
To quickly test the deployment, I went to my worker machines, located each container’s index.html, and replaced the default landing page with text that uniquely identifies that container. Running HTTP GETs against the load-balanced IP then shows responses from different containers:
[root@k8s-ms ~]# curl http://10.0.0.165
wk2-inst2
[root@k8s-ms ~]# curl http://10.0.0.165
wk2-inst2
[root@k8s-ms ~]# curl http://10.0.0.165
wk1-inst2
[root@k8s-ms ~]# curl http://10.0.0.165
wk1-inst2
[root@k8s-ms ~]# curl http://10.0.0.165
wk1-inst1
[root@k8s-ms ~]# curl http://10.0.0.165
wk2-inst2
Make nginx serve a docroot from the local filesystem
Here the previous deployment and service will be removed and new ones deployed from a spec file. nginx will also be configured to serve a local directory as its docroot instead of the default /usr/share/nginx/html.
On each worker machine, create a /var/sites/default directory. On the master node, create a deployment spec called nginx-deployment.yml:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      run: web
  replicas: 2
  template:
    metadata:
      labels:
        run: web
    spec:
      containers:
      - image: nginx
        name: nginx-localroot
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: nginx-root
      volumes:
      - name: nginx-root
        hostPath:
          # directory location on host
          path: /var/sites/default
          type: Directory
---
apiVersion: v1
kind: Service
metadata:
  labels:
    run: web
  name: nginx-lb
spec:
  ports:
  - port: 80
    protocol: TCP
  externalIPs:
  - 10.0.0.165
  selector:
    run: web
  type: LoadBalancer
Delete the previous deployment and service, then deploy new ones from the YAML file.
[root@k8s-ms ~]# kubectl delete deployment nginx
[root@k8s-ms ~]# kubectl delete service nginx-lb
[root@k8s-ms ~]# kubectl create -f nginx-deployment.yml
On each worker node, create an index.html under /var/sites/default and test it from the master node:
[root@k8s-ms ~]# curl http://10.0.0.165
Hello this is k8s-wk2
[root@k8s-ms ~]# curl http://10.0.0.165
Hello this is k8s-wk1
Dump a service in YAML
This is one way to see how to write a YAML spec based on an existing service:
kubectl get svc nginx-lb -o yaml
References
- https://blog.sourcerer.io/a-kubernetes-quick-start-for-people-who-know-just-enough-about-docker-to-get-by-71c5933b4633
- https://blog.alexellis.io/kubernetes-in-10-minutes/