lost and found ( for me ? )

Set up Kubernetes and Weave Net with kubeadm on Ubuntu 16.04

Here are logs from setting up Kubernetes and Weave Net with kubeadm.

There are three nodes: node01, node02, and node03.

The OS is Ubuntu 16.04 (Xenial).

node01 is the master.

Before setting up Kubernetes with kubeadm, give the master at least two vCPUs.
When the master had a single vCPU, it failed to create the pod network (Weave Net) because the kube-dns pods went OutOfcpu:

ubuntu@node01:~$ kubectl get pods --all-namespaces
kube-system   kube-dns-2924299975-htqzs         0/4       OutOfcpu   0          49m
kube-system   kube-dns-2924299975-jr3f2         0/4       OutOfcpu   0          49m

All nodes are running as VMs on KVM.

node01 : vCPU*2, memory 2G, 1 NIC(ens3)
node02 : vCPU*1, memory 1G, 1 NIC(ens3)
node03 : vCPU*1, memory 1G, 1 NIC(ens3)
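Since a single-vCPU master hit OutOfcpu, a small guard can check the CPU count before initializing the cluster. This is only a sketch; `require_cpus` is a hypothetical helper, not part of kubeadm:

```shell
#!/bin/sh
# Hypothetical guard: refuse to continue unless the host has enough vCPUs.
# (kube-dns went OutOfcpu when the master had only one.)
require_cpus() {
  min=$1
  have=$(nproc)
  if [ "$have" -lt "$min" ]; then
    echo "need at least $min vCPUs, found $have" >&2
    return 1
  fi
}

# usage on the master, before initializing the cluster:
# require_cpus 2 && sudo kubeadm init ...
```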

[ set up kubernetes with kubeadm ]

On the master, initialize the Kubernetes cluster.

Specify --api-advertise-addresses.
In my case, creating the pod network (Weave Net) failed when I did not specify this option.

ubuntu@node01:~$ sudo kubeadm init --api-advertise-addresses=
kubeadm join --token=0cfdd7.66786e47c27ec276
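To avoid typing the advertise address by hand, the master's IPv4 can be pulled out of `ip` output. A sketch, assuming the NIC is ens3; `nic_ipv4` is a hypothetical helper:

```shell
#!/bin/sh
# Hypothetical helper: extract the first IPv4 address from
# `ip -4 addr show <dev>` output, to pass to --api-advertise-addresses.
nic_ipv4() {
  awk '/inet /{sub(/\/.*/, "", $2); print $2; exit}'
}

# usage on the master (assumes the NIC is ens3):
# ADDR=$(ip -4 addr show ens3 | nic_ipv4)
# sudo kubeadm init --api-advertise-addresses="$ADDR"
```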

ubuntu@node01:~$ sudo kubectl apply -f https://git.io/weave-kube
daemonset "weave-net" created

ubuntu@node01:~$ kubectl get pods --all-namespaces
NAMESPACE     NAME                              READY     STATUS    RESTARTS   AGE
kube-system   dummy-2088944543-4167c            1/1       Running   0          22m
kube-system   etcd-node01                       1/1       Running   0          20m
kube-system   kube-apiserver-node01             1/1       Running   0          22m
kube-system   kube-controller-manager-node01    1/1       Running   0          21m
kube-system   kube-discovery-1769846148-qrrgb   1/1       Running   0          22m
kube-system   kube-dns-2924299975-1jrx2         4/4       Running   0          21m
kube-system   kube-proxy-6w7cv                  1/1       Running   0          21m
kube-system   kube-scheduler-node01             1/1       Running   0          21m
kube-system   weave-net-xvr5v                   2/2       Running   0          1m

On node02, join the cluster.

root@node02:~# kubeadm join --token=0cfdd7.66786e47c27ec276

On node03, join the cluster.

ubuntu@node03:~$ sudo kubeadm join --token=0cfdd7.66786e47c27ec276

On the master:

ubuntu@node01:~$ kubectl get nodes
NAME      STATUS         AGE
node01    Ready,master   26m
node02    Ready          54s
node03    Ready          3s
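node02 and node03 take a little while to show up as Ready after joining. A polling sketch, with a hypothetical `nodes_ready` parser over the `kubectl get nodes` output:

```shell
#!/bin/sh
# Hypothetical parser: succeed only when at least one node exists
# and none is NotReady.
nodes_ready() {
  # expects `kubectl get nodes --no-headers` output on stdin
  awk '$2 ~ /NotReady/ {bad=1} {n++} END {exit (n == 0 || bad)}'
}

# usage on the master:
# until kubectl get nodes --no-headers | nodes_ready; do sleep 5; done
```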

install a sample application (Sock Shop)

ubuntu@node01:~$ kubectl create namespace sock-shop
namespace "sock-shop" created

ubuntu@node01:~$ kubectl apply -n sock-shop -f "https://github.com/microservices-demo/microservices-demo/blob/master/deploy/kubernetes/complete-demo.yaml?raw=true" --validate=false
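The sock-shop pods take a few minutes to pull and start. Before checking the services, a sketch like this can wait for them; `pods_pending` is a hypothetical helper:

```shell
#!/bin/sh
# Hypothetical helper: count pods not yet in the Running state.
pods_pending() {
  # expects `kubectl get pods -n <ns> --no-headers` output on stdin
  awk '$3 != "Running" {n++} END {print n+0}'
}

# usage:
# while [ "$(kubectl get pods -n sock-shop --no-headers | pods_pending)" -gt 0 ]; do
#   sleep 10
# done
```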

ubuntu@node01:~$ kubectl get svc -n sock-shop
NAME           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
cart                        <none>        80/TCP         2m
cart-db                     <none>        27017/TCP      2m
catalogue                   <none>        80/TCP         2m
catalogue-db                <none>        3306/TCP       2m
front-end                   <nodes>       80:31500/TCP   2m
orders                      <none>        80/TCP         2m
orders-db                   <none>        27017/TCP      2m
payment                     <none>        80/TCP         2m
queue-master                <none>        80/TCP         2m
rabbitmq                    <none>        5672/TCP       2m
shipping                    <none>        80/TCP         2m
user                        <none>        80/TCP         2m
user-db                     <none>        27017/TCP      2m

ubuntu@node01:~$ kubectl describe svc front-end -n sock-shop
Name: front-end
Namespace: sock-shop
Labels: name=front-end
Selector: name=front-end
Type: NodePort
Port: <unset> 80/TCP
NodePort: <unset> 31500/TCP
Endpoints: <none>
Session Affinity: None
No events.

access http://<master ip>:31500
In my environment, the master IP is the address assigned to ens3:

ubuntu@node01:~$ ip -4 a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
   inet scope host lo
      valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
   inet brd scope global ens3
      valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
   inet scope global docker0
      valid_lft forever preferred_lft forever
6: weave: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1410 qdisc noqueue state UP group default qlen 1000
   inet scope global weave
      valid_lft forever preferred_lft forever

ubuntu@node01:~$ kubectl get pods -n sock-shop
NAME                            READY     STATUS    RESTARTS   AGE
cart-2630143515-bgwx2           1/1       Running   0          7m
cart-db-2053818980-11vbp        1/1       Running   0          7m
catalogue-1271079145-smcw2      1/1       Running   0          7m
catalogue-db-2196966982-kmsgt   1/1       Running   0          7m
front-end-2250085842-6qqls      1/1       Running   0          7m
orders-2938753226-nl8jg         1/1       Running   0          7m
orders-db-3277638702-svg0x      1/1       Running   0          7m
payment-2773294789-mfztx        1/1       Running   0          7m
queue-master-1190579278-wt59p   1/1       Running   0          7m
rabbitmq-3472039365-7xdll       1/1       Running   0          7m
shipping-492753731-wtmpr        1/1       Running   0          7m
user-3917232181-nhm5r           1/1       Running   0          7m
user-db-327013678-2020l         1/1       Running   0          7m

ubuntu@node01:~$ kubectl get pods -n kube-system
NAME                              READY     STATUS    RESTARTS   AGE
dummy-2088944543-4167c            1/1       Running   0          40m
etcd-node01                       1/1       Running   0          39m
kube-apiserver-node01             1/1       Running   0          40m
kube-controller-manager-node01    1/1       Running   0          39m
kube-discovery-1769846148-qrrgb   1/1       Running   0          40m
kube-dns-2924299975-1jrx2         4/4       Running   0          40m
kube-proxy-1z7cb                  1/1       Running   0          15m
kube-proxy-6w7cv                  1/1       Running   0          40m
kube-proxy-skd70                  1/1       Running   0          14m
kube-scheduler-node01             1/1       Running   0          40m
weave-net-7db4l                   2/2       Running   0          14m
weave-net-n6w3l                   2/2       Running   1          15m
weave-net-xvr5v                   2/2       Running   0          20m

[ dashboard ]


root@node01:~# kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml
deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created

root@node01:~# kubectl get svc --all-namespaces | grep dashboard
kube-system   kubernetes-dashboard    <nodes>       80:30998/TCP    51s

root@node01:~# kubectl get svc -n kube-system kubernetes-dashboard
NAME                   CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes-dashboard   <nodes>       80:30998/TCP   1m
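The dashboard gets an automatically assigned NodePort (30998 here). A small sketch can extract it from the PORT(S) column; `node_port` is a hypothetical helper:

```shell
#!/bin/sh
# Hypothetical helper: pull the NodePort out of a `kubectl get svc` line
# whose PORT(S) cell looks like 80:30998/TCP.
node_port() {
  sed -n 's/.*:\([0-9]*\)\/TCP.*/\1/p'
}

# usage:
# kubectl get svc -n kube-system kubernetes-dashboard --no-headers | node_port
```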

access the dashboard at http://<master ip>:30998

[ weave scope ( cloud ) ]

Get a token by creating an account at Weave Cloud.

root@node01:~# kubectl apply -f 'https://cloud.weave.works/launch/k8s/weavescope.yaml?service-token=<token>'
daemonset "weave-scope-agent" created

Ubuntu 16.04 LXD : install wireshark within CentOS-based containers

I could install wireshark in an Ubuntu-based LXD container with the default configuration, but I could not install it in a CentOS-based container until I set security.privileged to true.

Here is what I did.

start a CentOS 7 container
$ lxc launch 8c7eed37f93c cent7-01

install wireshark
With the default configuration, the installation failed.
$ lxc exec cent7-01 bash
[root@cent7-01 ~]# yum install wireshark -y

Dependency Installed:
 c-ares.x86_64 0:1.10.0-3.el7 gnutls.x86_64 0:3.3.24-1.el7   libpcap.x86_64 14:1.5.3-8.el7 libsmi.x86_64 0:0.4.8-13.el7
 nettle.x86_64 0:2.7.1-8.el7  trousers.x86_64 0:0.3.13-1.el7 wget.x86_64 0:1.14-13.el7

 wireshark.x86_64 0:1.10.14-10.el7

[root@cent7-01 ~]#

stop the container and set security.privileged to true.
$ lxc stop cent7-01

$ lxc config set cent7-01 security.privileged true

$ lxc config show cent7-01 | grep security
 security.privileged: "true"
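The same grep can be wrapped into a small check. A sketch; `is_privileged` is a hypothetical helper over the `lxc config show` output:

```shell
#!/bin/sh
# Hypothetical check: does `lxc config show <name>` output mark the
# container as privileged?
is_privileged() {
  grep -q 'security.privileged: "true"'
}

# usage:
# lxc config show cent7-01 | is_privileged && echo privileged
```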

start the container and install wireshark again; this time it succeeded.
$ lxc start cent7-01

$ lxc exec cent7-01 -- yum install wireshark -y
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: www.ftp.ne.jp
* extras: www.ftp.ne.jp
* updates: www.ftp.ne.jp
Resolving Dependencies
--> Running transaction check
---> Package wireshark.x86_64 0:1.10.14-10.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

Package                      Arch                      Version                              Repository               Size
wireshark                    x86_64                    1.10.14-10.el7                       base                     13 M

Transaction Summary
Install  1 Package

Total download size: 13 M
Installed size: 67 M
Downloading packages:
wireshark-1.10.14-10.el7.x86_64.rpm                                                                 |  13 MB  00:00:01
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
 Installing : wireshark-1.10.14-10.el7.x86_64                                                                         1/1
 Verifying  : wireshark-1.10.14-10.el7.x86_64                                                                         1/1

 wireshark.x86_64 0:1.10.14-10.el7


You can also set security.privileged to true on a profile, as below.
$ lxc profile set default security.privileged true

$ lxc profile show default
name: default
config:
  security.privileged: "true"
description: Default LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic