Here are my trial notes and logs from running a Kubernetes cluster on CoreOS.
Reference
Clone the coreos-kubernetes repository and change into the multi-node Vagrant directory.
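(The clone command below assumes the repository is CoreOS's coreos-kubernetes on GitHub, which matches the directory used next.)
$ git clone https://github.com/coreos/coreos-kubernetes.git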
$ cd coreos-kubernetes/multi-node/vagrant/
|
I will run five CoreOS virtual machines in total: one controller, three worker nodes, and one etcd node.
$ cp config.rb.sample config.rb
$ cat config.rb
$update_channel="alpha"
$controller_count=1
$controller_vm_memory=512
$worker_count=3
$worker_vm_memory=1024
$etcd_count=1
$etcd_vm_memory=512
|
Start the VMs.
$ vagrant up
|
You can read the Vagrantfile to see how Vagrant configures each type of node: controller, worker, and etcd.
$ less Vagrantfile
ETCD_CLOUD_CONFIG_PATH = File.expand_path("etcd-cloud-config.yaml")
CONTROLLER_CLOUD_CONFIG_PATH = File.expand_path("../generic/controller-install.sh")
WORKER_CLOUD_CONFIG_PATH = File.expand_path("../generic/worker-install.sh")
|
Etcd : etcd-cloud-config.yaml
Controller : ../generic/controller-install.sh
Worker : ../generic/worker-install.sh
Five VMs are now running:
Etcd : e1
Controller : c1
Worker nodes : w1, w2, w3
$ vagrant status
Current machine states:
e1 running (virtualbox)
c1 running (virtualbox)
w1 running (virtualbox)
w2 running (virtualbox)
w3 running (virtualbox)
This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.
|
You can access each VM via “vagrant ssh”.
$ vagrant ssh c1
Last login: Tue Mar 22 14:04:16 2016 from 10.0.2.2
CoreOS alpha (991.0.0)
Failed Units: 1
update-engine.service
core@c1 ~ $
|
Download kubectl to the Vagrant host so that you can manage the cluster from there.
$ curl -O https://storage.googleapis.com/kubernetes-release/release/v1.1.8/bin/linux/amd64/kubectl
$ chmod +x kubectl
$ sudo mv kubectl /usr/local/bin/
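To confirm the binary works before touching the cluster, you can print the client version (this check is my addition to the walkthrough):
$ kubectl version --client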
|
Configure kubectl
$ export KUBECONFIG="${KUBECONFIG}:$(pwd)/kubeconfig"
$ echo ${KUBECONFIG}
:/media/hattori/logical_vol1/Vagrant_works/CoreOS_kubernetes/coreos-kubernetes/multi-node/vagrant/kubeconfig:/media/hattori/logical_vol1/Vagrant_works/CoreOS_kubernetes/coreos-kubernetes/multi-node/vagrant/kubeconfig
$ kubectl config use-context vagrant-multi
switched to context "vagrant-multi".
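To double-check which clusters and contexts kubectl now knows about, you can dump the merged configuration (my addition, not part of the original log):
$ kubectl config view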
|
$ kubectl get nodes
NAME           LABELS                                STATUS    AGE
172.17.4.201   kubernetes.io/hostname=172.17.4.201   Ready     16m
172.17.4.202   kubernetes.io/hostname=172.17.4.202   Ready     16m
172.17.4.203   kubernetes.io/hostname=172.17.4.203   Ready     16m
$
|
w1, w2, and w3 have the IP addresses 172.17.4.201, 172.17.4.202, and 172.17.4.203.
core@w1 ~ $ ip a sh | grep inet |grep -v inet6
inet 127.0.0.1/8 scope host lo
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
inet 172.17.4.201/24 brd 172.17.4.255 scope global eth1
inet 10.2.71.0/16 scope global flannel.1
inet 10.2.71.1/24 scope global docker0
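The flannel.1 interface carries the pod overlay network (10.2.0.0/16), and docker0 is bridged onto this node's /24 slice of it (10.2.71.0/24). If you want to see the subnet lease flannel handed to the node, flannel normally records it in a subnet file; the path below is flannel's default and is an assumption for this setup:
$ vagrant ssh w1 -c "cat /run/flannel/subnet.env"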
|
$ kubectl cluster-info
Kubernetes master is running at https://172.17.4.101:443
Heapster is running at https://172.17.4.101:443/api/v1/proxy/namespaces/kube-system/services/heapster
KubeDNS is running at https://172.17.4.101:443/api/v1/proxy/namespaces/kube-system/services/kube-dns
InfluxDB is running at https://172.17.4.101:443/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb
|
c1 (the controller) has 172.17.4.101.
core@c1 ~ $ ip a s | grep inet | grep -v inet6
inet 127.0.0.1/8 scope host lo
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
inet 172.17.4.101/24 brd 172.17.4.255 scope global eth1
inet 10.2.28.0/16 scope global flannel.1
inet 10.2.28.1/24 scope global docker0
|
$ kubectl get
You must specify the type of resource to get. Valid resource types include:
* pods (aka 'po')
* replicationcontrollers (aka 'rc')
* daemonsets (aka 'ds')
* services (aka 'svc')
* events (aka 'ev')
* nodes (aka 'no')
* namespaces (aka 'ns')
* secrets
* persistentvolumes (aka 'pv')
* persistentvolumeclaims (aka 'pvc')
* limitranges (aka 'limits')
* resourcequotas (aka 'quota')
* horizontalpodautoscalers (aka 'hpa')
* componentstatuses (aka 'cs')
* endpoints (aka 'ep')
error: Required resource not specified.
see 'kubectl get -h' for help.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
$ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS AGE
$ kubectl get svc
NAME         CLUSTER_IP   EXTERNAL_IP   PORT(S)   SELECTOR   AGE
kubernetes   10.3.0.1     <none>        443/TCP   <none>     35m
$
$ kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok                   nil
scheduler            Healthy   ok                   nil
etcd-0               Healthy   {"health": "true"}   nil
$ kubectl get ep
NAME         ENDPOINTS          AGE
kubernetes   172.17.4.101:443   47m
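The kubernetes endpoint is simply the API server on c1. To see the service and its endpoints together in one view (my addition, not part of the original log):
$ kubectl describe svc kubernetes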
|
[ create a pod ]
I followed the walkthrough below.
http://kubernetes.io/docs/user-guide/walkthrough/
A pod is a group of one or more containers.
- nginx container
Prepare a YAML file for the pod.
$ cat pod-nginx.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
|
Create a pod
$ kubectl create -f ./pod-nginx.yaml
pod "nginx" created
|
$ kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
nginx   0/1     Pending   0          58s
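The pod stays Pending while the node schedules it and pulls the nginx image. If it seems stuck, the pod's event list shows what is going on (this step is my addition):
$ kubectl describe pod nginx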
|
Get the environment variables of the “nginx” pod.
$ kubectl exec nginx env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=nginx
KUBERNETES_PORT_443_TCP_ADDR=10.3.0.1
KUBERNETES_SERVICE_HOST=10.3.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.3.0.1:443
KUBERNETES_PORT_443_TCP=tcp://10.3.0.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
NGINX_VERSION=1.9.12-1~jessie
HOME=/root
|
$ kubectl exec nginx ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
8: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
    link/ether 02:42:0a:02:47:03 brd ff:ff:ff:ff:ff:ff
    inet 10.2.71.3/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:aff:fe02:4703/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
|
$ kubectl exec -ti nginx /bin/bash
root@nginx:/# exit
exit
$ kubectl exec -ti nginx /bin/bash
root@nginx:/# ip a s | grep inet
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
inet 10.2.71.3/24 scope global eth0
inet6 fe80::42:aff:fe02:4703/64 scope link tentative dadfailed
root@nginx:/# exit
exit
$
|
$ kubectl get pod nginx -o=yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: 2016-03-22T14:35:08Z
  name: nginx
  namespace: default
  resourceVersion: "1563"
  selfLink: /api/v1/namespaces/default/pods/nginx
  uid: 46e2cb05-f03b-11e5-8365-080027226901
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    ports:
    - containerPort: 80
      protocol: TCP
    resources: {}
    terminationMessagePath: /dev/termination-log
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-oexsv
      readOnly: true
  dnsPolicy: ClusterFirst
  nodeName: 172.17.4.201
  restartPolicy: Always
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  volumes:
  - name: default-token-oexsv
    secret:
      secretName: default-token-oexsv
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: null
    status: "True"
    type: Ready
  containerStatuses:
  - containerID: docker://4cea5da270079e10c54eaedb9a51b85ad7745a0ee5e06705742aed036dc68133
    image: nginx
    imageID: docker://sha256:af4b3d7d5401624ed3a747dc20f88e2b5e92e0ee9954aab8f1b5724d7edeca5e
    lastState: {}
    name: nginx
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: 2016-03-22T14:36:15Z
  hostIP: 172.17.4.201
  phase: Running
  podIP: 10.2.71.3
  startTime: 2016-03-22T14:35:08Z
|
$ kubectl delete pod nginx
pod "nginx" deleted
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
$
|
- persistent volume
$ cat pod-redis.yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis
spec:
  containers:
  - name: redis
    image: redis
    volumeMounts:
    - name: redis-persistent-storage
      mountPath: /data/redis
  volumes:
  - name: redis-persistent-storage
    emptyDir: {}
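Note that an emptyDir volume lives exactly as long as the pod: it survives container restarts, but its contents are removed when the pod is deleted. If the data had to outlive the pod, one alternative is a hostPath volume that maps a directory on the node into the container. A minimal sketch, where the /var/lib/redis-data path is hypothetical:
volumes:
- name: redis-persistent-storage
  hostPath:
    path: /var/lib/redis-data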
|
$ kubectl create -f pod-redis.yaml
pod "redis" created
|
$ kubectl get pod redis
NAME    READY   STATUS    RESTARTS   AGE
redis   1/1     Running   0          1m
|
$ kubectl exec redis env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=redis
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_ADDR=10.3.0.1
KUBERNETES_SERVICE_HOST=10.3.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.3.0.1:443
KUBERNETES_PORT_443_TCP=tcp://10.3.0.1:443
GOSU_VERSION=1.7
REDIS_VERSION=3.0.7
REDIS_DOWNLOAD_URL=http://download.redis.io/releases/redis-3.0.7.tar.gz
REDIS_DOWNLOAD_SHA1=e56b4b7e033ae8dbf311f9191cf6fdf3ae974d1c
HOME=/root
|
Log into the container and create a file.
$ kubectl exec redis -ti /bin/bash
root@redis:/data# mount | grep redis
/dev/sda9 on /data/redis type ext4 (rw,relatime,seclabel,data=ordered)
root@redis:/data# cd redis
root@redis:/data/redis# pwd
/data/redis
root@redis:/data/redis# echo hi > hi.txt
root@redis:/data/redis#
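You can read the file back from outside the pod to confirm it landed on the volume (my addition):
$ kubectl exec redis cat /data/redis/hi.txt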
|
The redis container is running on w1 (172.17.4.201).
$ kubectl get po redis -o=yaml
  hostIP: 172.17.4.201
  phase: Running
  podIP: 10.2.18.3
  startTime: 2016-03-23T15:12:44Z
|
$ vagrant ssh w1 -c "ip a s eth1"
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:1b:6f:06 brd ff:ff:ff:ff:ff:ff
    inet 172.17.4.201/24 brd 172.17.4.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe1b:6f06/64 scope link
       valid_lft forever preferred_lft forever
$ vagrant ssh w1 -c "docker ps | grep redis"
df945b71accc redis "/entrypoint.sh redis" 14 minutes ago Up 14 minutes k8s_redis.f4ca2bdb_redis_default_b1f464fe-f109-11e5-a453-080027226901_58f0708f
1a8e1893e84c gcr.io/google_containers/pause:0.8.0 "/pause" 14 minutes ago Up 14 minutes k8s_POD.6d00e006_redis_default_b1f464fe-f109-11e5-a453-080027226901_c4cc95f3
Connection to 127.0.0.1 closed.
|
$ kubectl delete po redis
pod "redis" deleted
|
- Multiple containers
This did not work in my environment (the image pull fails, as shown below), but it shows how to define multiple containers in a single pod.
$ cat pod-multi-containers.yaml
apiVersion: v1
kind: Pod
metadata:
  name: www
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - mountPath: /srv/www
      name: www-data
      readOnly: true
  - name: git-monitor
    image: kubernetes/git-monitor
    env:
    - name: GIT_REPO
      value: http://github.com/some/repo.git
    volumeMounts:
    - mountPath: /data
      name: www-data
  volumes:
  - name: www-data
    emptyDir: {}
|
$ kubectl create -f pod-multi-containers.yaml
pod "www" created
|
$ kubectl get po
NAME   READY   STATUS           RESTARTS   AGE
www    1/2     PullImageError   0          2m
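Only one of the two containers became ready; at least one image could not be pulled. The pod's events show the underlying pull error (my addition, not part of the original log):
$ kubectl describe pod www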
|
$ kubectl delete pod www
pod "www" deleted
|