lost and found ( for me ? )


Kubernetes : deploy Ceph cluster as persistent volume

Here are my notes from setting up a Ceph cluster to provide persistent volumes for Kubernetes.

This assumes you have already set up a Kubernetes cluster with Juju and MAAS:
http://lost-and-found-narihiro.blogspot.jp/2017/07/ubuntu-1604-deploy-kubernetes-cluster.html

MAAS: 1.9.5+bzr4599-0ubuntu1 (14.04.1)
Juju: 2.2.2-xenial-amd64

Before deploying Ceph:

Juju GUI (screenshot)

K8s dashboard (screenshot): no persistent volumes yet

[ deploy Ceph clusters with Juju ]

https://jujucharms.com/ceph-mon/
https://jujucharms.com/ceph-osd/

- Ceph mon

# juju deploy cs:ceph-mon -n 3

# juju status ceph-mon --format short

- ceph-mon/3: 192.168.40.42 (agent:allocating, workload:waiting)
- ceph-mon/4: 192.168.40.40 (agent:allocating, workload:waiting)
- ceph-mon/5: 192.168.40.41 (agent:allocating, workload:waiting)

After a few minutes, all units become active:

# juju status ceph-mon --format short

- ceph-mon/3: 192.168.40.42 (agent:idle, workload:active)
- ceph-mon/4: 192.168.40.40 (agent:idle, workload:active)
- ceph-mon/5: 192.168.40.41 (agent:idle, workload:active)

Juju GUI after deploying ceph-mon.

- Ceph osd

# cat ceph-osd-config.yaml
ceph-osd:
  osd-devices: /dev/vdb

# juju deploy cs:ceph-osd -n 3 --config ceph-osd-config.yaml
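The ceph-osd charm also accepts several devices as one space-separated list, so every spare disk on a storage node can be used. A sketch, assuming a second spare disk (/dev/vdc here is hypothetical):

```yaml
ceph-osd:
  # /dev/vdc is a hypothetical second disk; list every spare device
  osd-devices: /dev/vdb /dev/vdc
```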

# juju status ceph-osd --format short

- ceph-osd/0: 192.168.40.45 (agent:allocating, workload:waiting)
- ceph-osd/1: 192.168.40.43 (agent:allocating, workload:waiting)
- ceph-osd/2: 192.168.40.44 (agent:allocating, workload:waiting)

# juju add-relation ceph-mon ceph-osd

# juju status ceph-mon ceph-osd --format short

- ceph-mon/3: 192.168.40.42 (agent:executing, workload:active)
- ceph-mon/4: 192.168.40.40 (agent:executing, workload:active)
- ceph-mon/5: 192.168.40.41 (agent:executing, workload:active)
- ceph-osd/0: 192.168.40.45 (agent:executing, workload:active)
- ceph-osd/1: 192.168.40.43 (agent:executing, workload:active)
- ceph-osd/2: 192.168.40.44 (agent:executing, workload:active)



Relate the Kubernetes master to Ceph so Kubernetes can use RBD-backed volumes.

# juju add-relation kubernetes-master ceph-mon


Create a 50 MB RBD-backed persistent volume with the charm's create-rbd-pv action.

# juju run-action kubernetes-master/0 create-rbd-pv name=test size=50

# juju ssh kubernetes-master/0


$ kubectl get pv
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM     STORAGECLASS   REASON    AGE
test      50M        RWO           Retain          Available             rbd                      17s
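A side note on the capacity column: Kubernetes quantity suffixes are decimal ("M" = 10^6 bytes) unless you use the binary forms ("Mi" = 2^20 bytes), so the 50M above is 50,000,000 bytes, not 50 MiB. A quick check of the two interpretations:

```shell
# Kubernetes quantities: "M" is decimal (10^6), "Mi" is binary (2^20)
m_bytes=$((50 * 1000 * 1000))    # bytes in the 50M PV above
mi_bytes=$((50 * 1024 * 1024))   # what 50Mi would have been
echo "$m_bytes"                  # 50000000
echo "$mi_bytes"                 # 52428800
```

This is also why `df` inside the pod later reports roughly 46M: 50,000,000 bytes is about 47 MiB before filesystem overhead.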

$ kubectl get pvc
No resources found.

On the dashboard, the new persistent volume appears.

Reference
https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/

Create a persistent volume claim.
ubuntu@m-node05:~$ cat pv-claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-pv-claim
spec:
  storageClassName: rbd
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3M
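Note that the claim requests only 3M yet, as shown further down, it binds to the whole 50M volume: with statically provisioned PVs the claim is matched to an available volume at least as large as the request, and the pod gets the entire volume. A request no PV can satisfy stays Pending. A hypothetical example:

```yaml
resources:
  requests:
    storage: 100M   # larger than the only "rbd" PV (50M), so this
                    # claim would stay Pending until a big enough PV exists
```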


ubuntu@m-node05:~$ kubectl get pv
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM     STORAGECLASS   REASON    AGE
test      50M        RWO           Retain          Available             rbd                      20m

ubuntu@m-node05:~$ kubectl create -f pv-claim.yaml
persistentvolumeclaim "test-pv-claim" created

ubuntu@m-node05:~$ kubectl get pv
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM                   STORAGECLASS   REASON    AGE
test      50M        RWO           Retain          Bound     default/test-pv-claim   rbd                      20m

ubuntu@m-node05:~$ kubectl get pvc
NAME            STATUS    VOLUME    CAPACITY   ACCESSMODES   STORAGECLASS   AGE
test-pv-claim   Bound     test      50M        RWO           rbd            7s
ubuntu@m-node05:~$


Create a pod that uses the PVC.
ubuntu@m-node05:~$ cat create-a-pod-with-pvc.yaml
kind: Pod
apiVersion: v1
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: test-pv-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage

ubuntu@m-node05:~$ kubectl create -f create-a-pod-with-pvc.yaml
pod "task-pv-pod" created

$ kubectl get pod task-pv-pod
NAME          READY     STATUS    RESTARTS   AGE
task-pv-pod   1/1       Running   0          48s

ubuntu@m-node05:~$ kubectl exec -it task-pv-pod -- /bin/bash

root@task-pv-pod:~# df -h | grep rbd
/dev/rbd0        46M  2.6M   44M   6% /usr/share/nginx/html

root@task-pv-pod:~# apt update; apt install -y curl

root@task-pv-pod:/# echo 'hello world' > /usr/share/nginx/html/index.html

root@task-pv-pod:/# curl http://127.0.0.1
hello world

Access a ceph-mon node and check the cluster's health.
$ juju ssh ceph-mon/3

ubuntu@m-node10:~$ sudo ceph health
HEALTH_OK

ubuntu@m-node10:~$ sudo ceph osd stat
    osdmap e15: 3 osds: 3 up, 3 in
           flags sortbitwise,require_jewel_osds

ubuntu@m-node10:~$ sudo ceph -s
   cluster 80009c18-729c-11e7-9d93-5254009250af
    health HEALTH_OK
    monmap e1: 3 mons at {m-node08=192.168.40.40:6789/0,m-node09=192.168.40.41:6789/0,m-node10=192.168.40.42:6789/0}
           election epoch 4, quorum 0,1,2 m-node08,m-node09,m-node10
    osdmap e14: 3 osds: 3 up, 3 in
           flags sortbitwise,require_jewel_osds
     pgmap v145: 64 pgs, 1 pools, 8836 kB data, 8 objects
           109 MB used, 58225 MB / 58334 MB avail
                 64 active+clean

ubuntu@m-node10:~$ sudo ceph
ceph> health
HEALTH_OK

ceph> status
   cluster 80009c18-729c-11e7-9d93-5254009250af
    health HEALTH_OK
    monmap e1: 3 mons at {m-node08=192.168.40.40:6789/0,m-node09=192.168.40.41:6789/0,m-node10=192.168.40.42:6789/0}
           election epoch 4, quorum 0,1,2 m-node08,m-node09,m-node10
    osdmap e14: 3 osds: 3 up, 3 in
           flags sortbitwise,require_jewel_osds
     pgmap v145: 64 pgs, 1 pools, 8836 kB data, 8 objects
           109 MB used, 58225 MB / 58334 MB avail
                 64 active+clean

ceph> exit

Ubuntu 16.04: deploy Kubernetes cluster with Juju and MAAS

Here are my notes from setting up a Kubernetes cluster with Juju and MAAS.

MAAS Version 1.9.5+bzr4599-0ubuntu1 (14.04.1)
Juju 2.2.2 ( 2.2.2-xenial-amd64 )

This assumes you have already set up Juju and MAAS:
http://lost-and-found-narihiro.blogspot.jp/2016/11/ubuntu-1604-install-maas-within-ubuntu.html

[ deploy Kubernetes clusters ]

Download a bundle from https://jujucharms.com/canonical-kubernetes/ and deploy K8s with Juju:
# juju deploy ./bundle01.yaml

Here is a bundle I used.
# cat bundle01.yaml
series: xenial
description: 'A nine-machine Kubernetes cluster, appropriate for production. Includes
  a three-machine etcd cluster and three Kubernetes worker nodes.'
services:
  easyrsa:
    annotations:
      gui-x: '450'
      gui-y: '550'
    charm: cs:~containers/easyrsa-12
    num_units: 1
  etcd:
    annotations:
      gui-x: '800'
      gui-y: '550'
    charm: cs:~containers/etcd-40
    num_units: 1
  flannel:
    annotations:
      gui-x: '450'
      gui-y: '750'
    charm: cs:~containers/flannel-20
  kubeapi-load-balancer:
    annotations:
      gui-x: '450'
      gui-y: '250'
    charm: cs:~containers/kubeapi-load-balancer-16
    expose: true
    num_units: 1
  kubernetes-master:
    annotations:
      gui-x: '800'
      gui-y: '850'
    charm: cs:~containers/kubernetes-master-35
    num_units: 1
    options:
      channel: 1.7/stable
  kubernetes-worker:
    annotations:
      gui-x: '100'
      gui-y: '850'
    charm: cs:~containers/kubernetes-worker-40
    expose: true
    num_units: 2
    options:
      channel: 1.7/stable
relations:
- - kubernetes-master:kube-api-endpoint
  - kubeapi-load-balancer:apiserver
- - kubernetes-master:loadbalancer
  - kubeapi-load-balancer:loadbalancer
- - kubernetes-master:kube-control
  - kubernetes-worker:kube-control
- - kubernetes-master:certificates
  - easyrsa:client
- - etcd:certificates
  - easyrsa:client
- - kubernetes-master:etcd
  - etcd:db
- - kubernetes-worker:certificates
  - easyrsa:client
- - kubernetes-worker:kube-api-endpoint
  - kubeapi-load-balancer:website
- - kubeapi-load-balancer:certificates
  - easyrsa:client
- - flannel:etcd
  - etcd:db
- - flannel:cni
  - kubernetes-master:cni
- - flannel:cni
  - kubernetes-worker:cni
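Note that the bundle's description mentions a three-machine etcd cluster and three workers, while this copy deploys one etcd unit and two workers. To match the description for a production-like setup, raise the unit counts (a sketch; adjust to the machines available in your MAAS pool):

```yaml
etcd:
  num_units: 3
kubernetes-worker:
  num_units: 3
```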

After deploying K8s.
# juju status --format short

- easyrsa/0: 192.168.40.36 (agent:idle, workload:active)
- etcd/0: 192.168.40.32 (agent:idle, workload:active) 2379/tcp
- kubeapi-load-balancer/0: 192.168.40.37 (agent:idle, workload:active) 443/tcp
- kubernetes-master/0: 192.168.40.35 (agent:idle, workload:active) 6443/tcp
 - flannel/0: 192.168.40.35 (agent:idle, workload:active)
- kubernetes-worker/0: 192.168.40.38 (agent:idle, workload:active) 80/tcp, 443/tcp
 - flannel/2: 192.168.40.38 (agent:idle, workload:active)
- kubernetes-worker/1: 192.168.40.39 (agent:idle, workload:active) 80/tcp, 443/tcp
 - flannel/1: 192.168.40.39 (agent:idle, workload:active)

[ access to the dashboard ]

# juju config kubernetes-master enable-dashboard-addons=true
WARNING the configuration setting "enable-dashboard-addons" already has the value "true"

SSH to the kubernetes-master and look at the "config" file (a kubeconfig); it contains the API server address, username, and password.
# juju ssh kubernetes-master/0

ubuntu@m-node05:~$ cat config
    server: https://192.168.40.37:443
users:
- name: admin
  user:
    password: credentials
    username: admin

ubuntu@m-node05:~$ kubectl cluster-info
Kubernetes master is running at http://localhost:8080
Heapster is running at http://localhost:8080/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at http://localhost:8080/api/v1/namespaces/kube-system/services/kube-dns/proxy
kubernetes-dashboard is running at http://localhost:8080/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy
Grafana is running at http://localhost:8080/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
InfluxDB is running at http://localhost:8080/api/v1/namespaces/kube-system/services/monitoring-influxdb/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Access https://<IP>/ui and log in with the username and password from the config file.