
Set up Kubernetes and Weave Net with kubeadm on Ubuntu 16.04

Here are my logs from setting up Kubernetes and Weave Net with kubeadm.

There are three nodes: node01, node02, and node03.

The OS is Ubuntu 16.04 (Xenial).

node01 is the master.

Before setting up Kubernetes with kubeadm, give the master at least two vCPUs.
When the master had a single vCPU, creating the pod network (Weave Net) failed, and the kube-dns pods were stuck in OutOfcpu status as below.

ubuntu@node01:~$ kubectl get pods --all-namespaces
kube-system   kube-dns-2924299975-htqzs         0/4       OutOfcpu   0          49m
kube-system   kube-dns-2924299975-jr3f2         0/4       OutOfcpu   0          49m
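Given the OutOfcpu failures above, it is worth checking the vCPU count before running kubeadm init. A minimal sketch:

```shell
# kube-dns alone requests CPU for several containers, so a single-vCPU
# master can leave its pods stuck in OutOfcpu, as in the log above.
cpus=$(nproc)
if [ "$cpus" -lt 2 ]; then
  echo "only $cpus vCPU(s); add at least one more before kubeadm init"
fi
```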

All nodes are running as VMs within KVM.

node01 : vCPU*2, memory 2G, 1 NIC(ens3)
node02 : vCPU*1, memory 1G, 1 NIC(ens3)
node03 : vCPU*1, memory 1G, 1 NIC(ens3)

[ set up Kubernetes with kubeadm ]

On the master, initialize the Kubernetes cluster.

Specify --api-advertise-addresses; in my case, creating the pod network (Weave Net) failed when this option was not specified.

ubuntu@node01:~$ sudo kubeadm init --api-advertise-addresses=
kubeadm join --token=0cfdd7.66786e47c27ec276

ubuntu@node01:~$ sudo kubectl apply -f https://git.io/weave-kube
daemonset "weave-net" created

ubuntu@node01:~$ kubectl get pods --all-namespaces
NAMESPACE     NAME                              READY     STATUS    RESTARTS   AGE
kube-system   dummy-2088944543-4167c            1/1       Running   0          22m
kube-system   etcd-node01                       1/1       Running   0          20m
kube-system   kube-apiserver-node01             1/1       Running   0          22m
kube-system   kube-controller-manager-node01    1/1       Running   0          21m
kube-system   kube-discovery-1769846148-qrrgb   1/1       Running   0          22m
kube-system   kube-dns-2924299975-1jrx2         4/4       Running   0          21m
kube-system   kube-proxy-6w7cv                  1/1       Running   0          21m
kube-system   kube-scheduler-node01             1/1       Running   0          21m
kube-system   weave-net-xvr5v                   2/2       Running   0          1m
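To confirm the weave-net DaemonSet has a ready pod on every node, its desired and ready counts can be compared. A sketch, assuming the DaemonSet status fields `desiredNumberScheduled` and `numberReady` and jsonpath support in this kubectl:

```shell
# Report whether the weave-net DaemonSet has a ready pod on every node.
weave_rollout_status() {
  desired=$(kubectl get ds weave-net -n kube-system \
    -o jsonpath='{.status.desiredNumberScheduled}')
  ready=$(kubectl get ds weave-net -n kube-system \
    -o jsonpath='{.status.numberReady}')
  echo "weave-net: ${ready}/${desired} pods ready"
  [ "$ready" = "$desired" ]
}
```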

On node02, join the cluster.

root@node02:~# kubeadm join --token=0cfdd7.66786e47c27ec276

On node03, join the cluster.

ubuntu@node03:~$ sudo kubeadm join --token=0cfdd7.66786e47c27ec276

On the master:

ubuntu@node01:~$ kubectl get nodes
NAME      STATUS         AGE
node01    Ready,master   26m
node02    Ready          54s
node03    Ready          3s

Install the sample application (Sock Shop).

ubuntu@node01:~$ kubectl create namespace sock-shop
namespace "sock-shop" created

ubuntu@node01:~$ kubectl apply -n sock-shop -f "https://github.com/microservices-demo/microservices-demo/blob/master/deploy/kubernetes/complete-demo.yaml?raw=true" --validate=false
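The images take a few minutes to pull, so a small polling helper is handy before checking the services. A sketch (the 60 x 10 s timeout is an arbitrary choice):

```shell
# Poll until every pod in the given namespace reports Running.
wait_pods_running() {
  ns=$1
  for _ in $(seq 1 60); do
    pending=$(kubectl get pods -n "$ns" --no-headers \
      | grep -cv ' Running ') || true
    [ "$pending" -eq 0 ] && return 0
    sleep 10
  done
  echo "timed out waiting for pods in $ns" >&2
  return 1
}
# usage: wait_pods_running sock-shop
```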

ubuntu@node01:~$ kubectl get svc -n sock-shop
NAME           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
cart                        <none>        80/TCP         2m
cart-db                     <none>        27017/TCP      2m
catalogue                   <none>        80/TCP         2m
catalogue-db                <none>        3306/TCP       2m
front-end                   <nodes>       80:31500/TCP   2m
orders                      <none>        80/TCP         2m
orders-db                   <none>        27017/TCP      2m
payment                     <none>        80/TCP         2m
queue-master                <none>        80/TCP         2m
rabbitmq                    <none>        5672/TCP       2m
shipping                    <none>        80/TCP         2m
user                        <none>        80/TCP         2m
user-db                     <none>        27017/TCP      2m

ubuntu@node01:~$ kubectl describe svc front-end -n sock-shop
Name: front-end
Namespace: sock-shop
Labels: name=front-end
Selector: name=front-end
Type: NodePort
Port: <unset> 80/TCP
NodePort: <unset> 31500/TCP
Endpoints: <none>
Session Affinity: None
No events.

Access http://<master ip>:31500.
In my environment, the master IP is the address on ens3:

ubuntu@node01:~$ ip -4 a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
   inet scope host lo
      valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
   inet brd scope global ens3
      valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
   inet scope global docker0
      valid_lft forever preferred_lft forever
6: weave: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1410 qdisc noqueue state UP group default qlen 1000
   inet scope global weave
      valid_lft forever preferred_lft forever
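Putting the pieces together, the URL can be built from the ens3 address and the service's NodePort. A sketch; the interface name and the service/namespace are from this setup, and jsonpath support in kubectl is assumed:

```shell
# Build the front-end URL from the node's ens3 address and the NodePort.
frontend_url() {
  addr=$(ip -4 -o addr show ens3 | awk '{print $4}' | cut -d/ -f1)
  port=$(kubectl get svc front-end -n sock-shop \
    -o jsonpath='{.spec.ports[0].nodePort}')
  echo "http://${addr}:${port}/"
}
# usage: curl -sI "$(frontend_url)" | head -n 1
```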

ubuntu@node01:~$ kubectl get pods -n sock-shop
NAME                            READY     STATUS    RESTARTS   AGE
cart-2630143515-bgwx2           1/1       Running   0          7m
cart-db-2053818980-11vbp        1/1       Running   0          7m
catalogue-1271079145-smcw2      1/1       Running   0          7m
catalogue-db-2196966982-kmsgt   1/1       Running   0          7m
front-end-2250085842-6qqls      1/1       Running   0          7m
orders-2938753226-nl8jg         1/1       Running   0          7m
orders-db-3277638702-svg0x      1/1       Running   0          7m
payment-2773294789-mfztx        1/1       Running   0          7m
queue-master-1190579278-wt59p   1/1       Running   0          7m
rabbitmq-3472039365-7xdll       1/1       Running   0          7m
shipping-492753731-wtmpr        1/1       Running   0          7m
user-3917232181-nhm5r           1/1       Running   0          7m
user-db-327013678-2020l         1/1       Running   0          7m

ubuntu@node01:~$ kubectl get pods -n kube-system
NAME                              READY     STATUS    RESTARTS   AGE
dummy-2088944543-4167c            1/1       Running   0          40m
etcd-node01                       1/1       Running   0          39m
kube-apiserver-node01             1/1       Running   0          40m
kube-controller-manager-node01    1/1       Running   0          39m
kube-discovery-1769846148-qrrgb   1/1       Running   0          40m
kube-dns-2924299975-1jrx2         4/4       Running   0          40m
kube-proxy-1z7cb                  1/1       Running   0          15m
kube-proxy-6w7cv                  1/1       Running   0          40m
kube-proxy-skd70                  1/1       Running   0          14m
kube-scheduler-node01             1/1       Running   0          40m
weave-net-7db4l                   2/2       Running   0          14m
weave-net-n6w3l                   2/2       Running   1          15m
weave-net-xvr5v                   2/2       Running   0          20m

[ dashboard ]


root@node01:~# kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml
deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created

root@node01:~# kubectl get svc --all-namespaces | grep dashboard
kube-system   kubernetes-dashboard    <nodes>       80:30998/TCP    51s

root@node01:~# kubectl get svc -n kube-system kubernetes-dashboard
NAME                   CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes-dashboard                <nodes>       80:30998/TCP   1m

Access the dashboard at http://<node ip>:30998.
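The dashboard Service is also a NodePort, so the port (30998 in this run; NodePorts are assigned from the cluster's NodePort range) can be read from the Service spec instead of grepping the service list:

```shell
# Read the dashboard's NodePort from the Service spec.
dashboard_port() {
  kubectl get svc kubernetes-dashboard -n kube-system \
    -o jsonpath='{.spec.ports[0].nodePort}'
}
# then browse to http://<node ip>:$(dashboard_port)
```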

[ weave scope ( cloud ) ]

Get a token by creating an account at Weave Cloud.

root@node01:~# kubectl apply -f 'https://cloud.weave.works/launch/k8s/weavescope.yaml?service-token=<token>'
daemonset "weave-scope-agent" created
