lost and found ( for me ? )

Set up Kubernetes and Weave Net with kubeadm on Ubuntu 16.04

Here are the logs from setting up Kubernetes and Weave Net with kubeadm.

There are three nodes: node01, node02, and node03.

The OS is Xenial (Ubuntu 16.04).

node01 is the master.

Before setting up Kubernetes with kubeadm, give the master at least two vCPUs.
When the master had a single vCPU, the kube-dns pods failed with OutOfcpu and the pod network (Weave Net) never came up, as below.

ubuntu@node01:~$ kubectl get pods --all-namespaces
kube-system   kube-dns-2924299975-htqzs         0/4       OutOfcpu   0          49m
kube-system   kube-dns-2924299975-jr3f2         0/4       OutOfcpu   0          49m
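
If you hit this, check how much CPU the scheduler sees on the node. This is just the check I would run, not part of the original log:

ubuntu@node01:~$ kubectl describe node node01 | grep -A 3 Capacity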

All nodes are running as VMs within KVM.

node01 : vCPU*2, memory 2G, 1 NIC(ens3) 10.14.0.10
node02 : vCPU*1, memory 1G, 1 NIC(ens3) 10.14.0.11
node03 : vCPU*1, memory 1G, 1 NIC(ens3) 10.14.0.12

[ set up Kubernetes with kubeadm ]

On the master, initialize the Kubernetes cluster.

Specify --api-advertise-addresses.
In my case, creating the pod network (Weave Net) failed when I did not specify this option.

ubuntu@node01:~$ sudo kubeadm init --api-advertise-addresses=10.14.0.10
<snip>
kubeadm join --token=0cfdd7.66786e47c27ec276 10.14.0.10

ubuntu@node01:~$ sudo kubectl apply -f https://git.io/weave-kube
daemonset "weave-net" created
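
Before joining the other nodes, it is worth confirming that the Weave Net daemonset is running; a check like this should do (not part of the original log):

ubuntu@node01:~$ kubectl get daemonset weave-net -n kube-system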

ubuntu@node01:~$ kubectl get pods --all-namespaces
NAMESPACE     NAME                              READY     STATUS    RESTARTS   AGE
kube-system   dummy-2088944543-4167c            1/1       Running   0          22m
kube-system   etcd-node01                       1/1       Running   0          20m
kube-system   kube-apiserver-node01             1/1       Running   0          22m
kube-system   kube-controller-manager-node01    1/1       Running   0          21m
kube-system   kube-discovery-1769846148-qrrgb   1/1       Running   0          22m
kube-system   kube-dns-2924299975-1jrx2         4/4       Running   0          21m
kube-system   kube-proxy-6w7cv                  1/1       Running   0          21m
kube-system   kube-scheduler-node01             1/1       Running   0          21m
kube-system   weave-net-xvr5v                   2/2       Running   0          1m

On node02, join the cluster.

root@node02:~# kubeadm join --token=0cfdd7.66786e47c27ec276 10.14.0.10

On node03, join the cluster.

ubuntu@node03:~$ sudo kubeadm join --token=0cfdd7.66786e47c27ec276 10.14.0.10


On the master, check that all nodes joined.

ubuntu@node01:~$ kubectl get nodes
NAME      STATUS         AGE
node01    Ready,master   26m
node02    Ready          54s
node03    Ready          3s
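
To also see the internal IP each node registered with, -o wide helps (my addition, output omitted):

ubuntu@node01:~$ kubectl get nodes -o wide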

Install the sample application (Sock Shop).

ubuntu@node01:~$ kubectl create namespace sock-shop
namespace "sock-shop" created

ubuntu@node01:~$ kubectl apply -n sock-shop -f "https://github.com/microservices-demo/microservices-demo/blob/master/deploy/kubernetes/complete-demo.yaml?raw=true" --validate=false
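
The images take a while to pull, so watching the pods until they are all Running is handy (my usual habit, not part of the original log):

ubuntu@node01:~$ kubectl get pods -n sock-shop -w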

ubuntu@node01:~$ kubectl get svc -n sock-shop
NAME           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
cart           10.100.226.122   <none>        80/TCP         2m
cart-db        10.104.2.181     <none>        27017/TCP      2m
catalogue      10.103.204.157   <none>        80/TCP         2m
catalogue-db   10.108.4.214     <none>        3306/TCP       2m
front-end      10.100.255.194   <nodes>       80:31500/TCP   2m
orders         10.109.172.211   <none>        80/TCP         2m
orders-db      10.102.129.40    <none>        27017/TCP      2m
payment        10.106.32.31     <none>        80/TCP         2m
queue-master   10.109.87.11     <none>        80/TCP         2m
rabbitmq       10.96.4.246      <none>        5672/TCP       2m
shipping       10.102.231.77    <none>        80/TCP         2m
user           10.106.4.159     <none>        80/TCP         2m
user-db        10.109.48.189    <none>        27017/TCP      2m
ubuntu@node01:~$


ubuntu@node01:~$ kubectl describe svc front-end -n sock-shop
Name: front-end
Namespace: sock-shop
Labels: name=front-end
Selector: name=front-end
Type: NodePort
IP: 10.100.255.194
Port: <unset> 80/TCP
NodePort: <unset> 31500/TCP
Endpoints: <none>
Session Affinity: None
No events.
ubuntu@node01:~$
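
Endpoints is <none> here, presumably because the front-end pod was not Running yet when I ran describe; once it is, the service should list the pod IP. A follow-up check:

ubuntu@node01:~$ kubectl get endpoints front-end -n sock-shop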

Access http://<master ip>:31500.
In my environment, the master IP is 10.14.0.10.
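
A quick test from the master itself; any node IP should answer, since a NodePort listens on every node:

ubuntu@node01:~$ curl -I http://10.14.0.10:31500/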


ubuntu@node01:~$ ip -4 a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
   inet 127.0.0.1/8 scope host lo
      valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
   inet 10.14.0.10/24 brd 10.14.0.255 scope global ens3
      valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
   inet 172.17.0.1/16 scope global docker0
      valid_lft forever preferred_lft forever
6: weave: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1410 qdisc noqueue state UP group default qlen 1000
   inet 10.32.0.1/12 scope global weave
      valid_lft forever preferred_lft forever
ubuntu@node01:~$
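
You can also ask Weave Net itself for its view of the cluster by exec-ing into one of its pods. The /home/weave/weave path is my assumption about where the weave-kube image ships the binary; adjust the pod name to yours:

ubuntu@node01:~$ kubectl exec -n kube-system weave-net-xvr5v -c weave -- /home/weave/weave --local status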

ubuntu@node01:~$ kubectl get pods -n sock-shop
NAME                            READY     STATUS    RESTARTS   AGE
cart-2630143515-bgwx2           1/1       Running   0          7m
cart-db-2053818980-11vbp        1/1       Running   0          7m
catalogue-1271079145-smcw2      1/1       Running   0          7m
catalogue-db-2196966982-kmsgt   1/1       Running   0          7m
front-end-2250085842-6qqls      1/1       Running   0          7m
orders-2938753226-nl8jg         1/1       Running   0          7m
orders-db-3277638702-svg0x      1/1       Running   0          7m
payment-2773294789-mfztx        1/1       Running   0          7m
queue-master-1190579278-wt59p   1/1       Running   0          7m
rabbitmq-3472039365-7xdll       1/1       Running   0          7m
shipping-492753731-wtmpr        1/1       Running   0          7m
user-3917232181-nhm5r           1/1       Running   0          7m
user-db-327013678-2020l         1/1       Running   0          7m
ubuntu@node01:~$

ubuntu@node01:~$ kubectl get pods -n kube-system
NAME                              READY     STATUS    RESTARTS   AGE
dummy-2088944543-4167c            1/1       Running   0          40m
etcd-node01                       1/1       Running   0          39m
kube-apiserver-node01             1/1       Running   0          40m
kube-controller-manager-node01    1/1       Running   0          39m
kube-discovery-1769846148-qrrgb   1/1       Running   0          40m
kube-dns-2924299975-1jrx2         4/4       Running   0          40m
kube-proxy-1z7cb                  1/1       Running   0          15m
kube-proxy-6w7cv                  1/1       Running   0          40m
kube-proxy-skd70                  1/1       Running   0          14m
kube-scheduler-node01             1/1       Running   0          40m
weave-net-7db4l                   2/2       Running   0          14m
weave-net-n6w3l                   2/2       Running   1          15m
weave-net-xvr5v                   2/2       Running   0          20m
ubuntu@node01:~$


[ dashboard ]

https://github.com/kubernetes/dashboard#kubernetes-dashboard

root@node01:~# kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml
deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created
root@node01:~#

root@node01:~# kubectl get svc --all-namespaces | grep dashboard
kube-system   kubernetes-dashboard   10.111.185.63    <nodes>       80:30998/TCP    51s

root@node01:~# kubectl get svc -n kube-system kubernetes-dashboard
NAME                   CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes-dashboard   10.111.185.63   <nodes>       80:30998/TCP   1m
root@node01:~#


Access the dashboard at:

http://10.14.0.10:30998/
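
If you would rather not use the NodePort, kubectl proxy should also work; if I remember correctly, this version of the dashboard was reachable at http://localhost:8001/ui through the proxy:

root@node01:~# kubectl proxy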



[ weave scope ( cloud ) ]


Get a token by creating an account at Weave Cloud.

root@node01:~# kubectl apply -f 'https://cloud.weave.works/launch/k8s/weavescope.yaml?service-token=<token>'
daemonset "weave-scope-agent" created
root@node01:~#
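
To confirm the agent is running on every node (I believe the daemonset lands in the default namespace, but grep across all namespaces to be safe):

root@node01:~# kubectl get ds --all-namespaces | grep weave-scope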

Ubuntu 16.04 LXD: install Wireshark within CentOS-based containers

With the default configuration I could install Wireshark within an Ubuntu-based LXD container, but I could not install it in CentOS-based containers until I set “security.privileged” to true.

Here is what I did.

Start a CentOS 7 container.
$ lxc launch 8c7eed37f93c cent7-01

Try to install Wireshark. It fails:
$ lxc exec cent7-01 bash
[root@cent7-01 ~]# yum install wireshark -y

Dependency Installed:
 c-ares.x86_64 0:1.10.0-3.el7 gnutls.x86_64 0:3.3.24-1.el7   libpcap.x86_64 14:1.5.3-8.el7 libsmi.x86_64 0:0.4.8-13.el7
 nettle.x86_64 0:2.7.1-8.el7  trousers.x86_64 0:0.3.13-1.el7 wget.x86_64 0:1.14-13.el7

Failed:
 wireshark.x86_64 0:1.10.14-10.el7

Complete!
[root@cent7-01 ~]#
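
The failure is presumably because the wireshark RPM tries to set file capabilities on dumpcap, which an unprivileged container is not allowed to do. You can check the container's current setting (empty or false means unprivileged):

$ lxc config get cent7-01 security.privileged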

Stop the container and set security.privileged to true.
$ lxc stop cent7-01

$ lxc config set cent7-01 security.privileged true

$ lxc config show cent7-01 | grep security
 security.privileged: "true"

Start the container and install Wireshark.
$ lxc start cent7-01

$ lxc exec cent7-01 -- yum install wireshark -y
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: www.ftp.ne.jp
* extras: www.ftp.ne.jp
* updates: www.ftp.ne.jp
Resolving Dependencies
--> Running transaction check
---> Package wireshark.x86_64 0:1.10.14-10.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

===========================================================================================================================
Package                      Arch                      Version                              Repository               Size
===========================================================================================================================
Installing:
wireshark                    x86_64                    1.10.14-10.el7                       base                     13 M

Transaction Summary
===========================================================================================================================
Install  1 Package

Total download size: 13 M
Installed size: 67 M
Downloading packages:
wireshark-1.10.14-10.el7.x86_64.rpm                                                                 |  13 MB  00:00:01
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
 Installing : wireshark-1.10.14-10.el7.x86_64                                                                         1/1
 Verifying  : wireshark-1.10.14-10.el7.x86_64                                                                         1/1

Installed:
 wireshark.x86_64 0:1.10.14-10.el7

Complete!

You can also set “security.privileged” to true in a profile, as below.
$ lxc profile set default security.privileged true

$ lxc profile show default
name: default
config:
 security.privileged: "true"
description: Default LXD profile
devices:
 eth0:
   name: eth0
   nictype: bridged
   parent: lxdbr0
   type: nic
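
With the profile changed, any new container launched from it is privileged from the start. For example (the image alias depends on your remote; images:centos/7/amd64 is what I would use):

$ lxc launch images:centos/7/amd64 cent7-02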