CentOS 7: install and set up GlusterFS

Here are my notes from setting up GlusterFS on CentOS 7.

Reference
http://www.linuxtechi.com/setup-glusterfs-storage-on-centos-7-rhel-7/

Five VMs are running on a KVM host: four are GlusterFS servers and one is a client.
$ virsh list --all | grep gluster
26    cent7-gluster02                running
27    cent7-gluster03                running
29    cent7-gluster01                running
30    cent7-gluster04                running
31    cent7-gluster-client           running

Edit /etc/hosts so that the Gluster servers can resolve each other by name.
[root@gluster01 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.130.110 gluster01 gluster01.localdomain
192.168.130.111 gluster02 gluster02.localdomain
192.168.130.112 gluster03 gluster03.localdomain
192.168.130.113 gluster04 gluster04.localdomain
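I edited /etc/hosts by hand on each VM, but since the rest of this post drives the servers through an Ansible group called gluster, the same entries could be pushed everywhere with an ad-hoc command like this (just a sketch; the grep guard keeps it idempotent):
$ ansible gluster -m shell -a "grep -q gluster01 /etc/hosts || printf '192.168.130.110 gluster01 gluster01.localdomain\n192.168.130.111 gluster02 gluster02.localdomain\n192.168.130.112 gluster03 gluster03.localdomain\n192.168.130.113 gluster04 gluster04.localdomain\n' >> /etc/hosts"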

Add a block storage device to each Gluster server.
Currently the servers have only one storage device, /dev/vda.
[root@gluster01 ~]# ls /dev/vd*
/dev/vda  /dev/vda1
[root@gluster01 ~]#

On the KVM host, create a sparse disk image for each of the four Gluster servers.
$ sudo dd if=/dev/zero of=/var/lib/libvirt/images/gluster01-block.img bs=1M seek=4096 count=0
$ sudo dd if=/dev/zero of=/var/lib/libvirt/images/gluster02-block.img bs=1M seek=4096 count=0
$ sudo dd if=/dev/zero of=/var/lib/libvirt/images/gluster03-block.img bs=1M seek=4096 count=0
$ sudo dd if=/dev/zero of=/var/lib/libvirt/images/gluster04-block.img bs=1M seek=4096 count=0
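As an aside, qemu-img can create the same sparse raw image in one step; either approach should work:
$ sudo qemu-img create -f raw /var/lib/libvirt/images/gluster01-block.img 4G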

Attach each disk image to its VM as vdb.
$ sudo virsh attach-disk cent7-gluster01 /var/lib/libvirt/images/gluster01-block.img vdb --cache none
Disk attached successfully

$ sudo virsh attach-disk cent7-gluster02 /var/lib/libvirt/images/gluster02-block.img vdb --cache none
Disk attached successfully

$ sudo virsh attach-disk cent7-gluster03 /var/lib/libvirt/images/gluster03-block.img vdb --cache none
Disk attached successfully

$ sudo virsh attach-disk cent7-gluster04 /var/lib/libvirt/images/gluster04-block.img vdb --cache none
Disk attached successfully
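Note that virsh attach-disk without extra flags only changes the live domain, so these disks would disappear after a VM shutdown and start. For a longer-lived setup you would probably also want --persistent, something like:
$ sudo virsh attach-disk cent7-gluster01 /var/lib/libvirt/images/gluster01-block.img vdb --cache none --persistent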

Confirm vdb has been added.
$ ansible gluster -m shell -a 'ls /dev/vd*'
192.168.130.112 | success | rc=0 >>
/dev/vda
/dev/vda1
/dev/vdb

192.168.130.110 | success | rc=0 >>
/dev/vda
/dev/vda1
/dev/vdb

192.168.130.111 | success | rc=0 >>
/dev/vda
/dev/vda1
/dev/vdb

192.168.130.113 | success | rc=0 >>
/dev/vda
/dev/vda1
/dev/vdb

Install the GlusterFS server packages, then start and enable glusterd on all servers.
$ ansible gluster -m shell -a 'yum install -y centos-release-gluster epel-release'

$ ansible gluster -m shell -a 'yum install -y glusterfs-server'

$ ansible gluster -m shell -a 'systemctl start glusterd'

$ ansible gluster -m shell -a 'systemctl enable glusterd'
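Firewalling was not an issue in this lab, but on a stock CentOS 7 install with firewalld running you would also need to open the Gluster ports (24007-24008 for management plus one port per brick starting at 49152); roughly:
$ ansible gluster -m shell -a 'firewall-cmd --permanent --add-port=24007-24008/tcp && firewall-cmd --permanent --add-port=49152-49160/tcp && firewall-cmd --reload'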

[ volume setup : distributed volume ]

Create a trusted storage pool between gluster01 and gluster02.

On gluster01:
[root@gluster01 ~]# gluster peer probe gluster02
peer probe: success.

[root@gluster01 ~]# gluster peer status
Number of Peers: 1

Hostname: gluster02
Uuid: e8a4bb2e-3289-4002-9f33-0775f39e9588
State: Peer in Cluster (Connected)

Create a brick on gluster01.
[root@gluster01 ~]# pvcreate /dev/vdb

[root@gluster01 ~]# vgcreate vg-bricks /dev/vdb

[root@gluster01 ~]# lvcreate -L 2G -n lv-brick01 vg-bricks

[root@gluster01 ~]# lvdisplay
 --- Logical volume ---
 LV Path                /dev/vg-bricks/lv-brick01
 LV Name                lv-brick01
 VG Name                vg-bricks
 LV UUID                jOFkXI-h13U-tVju-a416-bHFX-xlZO-CF3Uo3
 LV Write Access        read/write
 LV Creation host, time gluster01.localdomain, 2016-09-26 13:26:28 +0900
 LV Status              available
 # open                 0
 LV Size                2.00 GiB
 Current LE             512
 Segments               1
 Allocation             inherit
 Read ahead sectors     auto
 - currently set to     8192
 Block device           252:0

[root@gluster01 ~]# mkfs.xfs -i size=512 /dev/vg-bricks/lv-brick01
[root@gluster01 ~]# mkdir -p /bricks/dist_brick1
[root@gluster01 ~]# mount /dev/vg-bricks/lv-brick01 /bricks/dist_brick1
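The mount above is not persistent. If the brick should survive a reboot, an /etc/fstab entry along these lines (same device and mount point as above) would be needed on each server:
[root@gluster01 ~]# echo '/dev/vg-bricks/lv-brick01 /bricks/dist_brick1 xfs defaults 0 0' >> /etc/fstab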

On gluster02, do the same:
[root@gluster02 ~]# pvcreate /dev/vdb
 Physical volume "/dev/vdb" successfully created

[root@gluster02 ~]# vgcreate vg-bricks /dev/vdb
 Volume group "vg-bricks" successfully created


[root@gluster02 ~]# lvcreate -L 2G -n lv-brick01 vg-bricks
 Logical volume "lv-brick01" created.

[root@gluster02 ~]# mkfs.xfs -i size=512 /dev/vg-bricks/lv-brick01
meta-data=/dev/vg-bricks/lv-brick01 isize=512    agcount=4, agsize=131072 blks
        =                       sectsz=512   attr=2, projid32bit=1
        =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=524288, imaxpct=25
        =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=2560, version=2
        =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

[root@gluster02 ~]# mkdir -p /bricks/dist_brick1

[root@gluster02 ~]# mount /dev/vg-bricks/lv-brick01 /bricks/dist_brick1
[root@gluster02 ~]#

Create a distributed volume on gluster01.

The first attempt fails because the brick path must be a sub-directory of the mount point, not the mount point itself.
[root@gluster01 ~]# gluster volume create distvol gluster01:/bricks/dist_brick1 gluster02:/bricks/dist_brick1
volume create: distvol: failed: The brick gluster01:/bricks/dist_brick1 is a mount point. Please create a sub-directory under the mount point and use that as the brick directory. Or use 'force' at the end of the command if you want to override this behavior.

Create the volume again, this time using a sub-directory under each mount point as the brick.
[root@gluster01 ~]# gluster volume create distvol gluster01:/bricks/dist_brick1/brick gluster02:/bricks/dist_brick1/brick
volume create: distvol: success: please start the volume to access data
[root@gluster01 ~]#

Start the volume.
[root@gluster01 ~]# gluster volume start distvol
volume start: distvol: success

[root@gluster01 ~]# gluster volume info distvol

Volume Name: distvol
Type: Distribute
Volume ID: 0f5e64e6-3219-4ab3-be57-c3c9da049e7e
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: gluster01:/bricks/dist_brick1/brick
Brick2: gluster02:/bricks/dist_brick1/brick
Options Reconfigured:
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on

Mount the volume on the client (create the mount point first with mkdir -p /mnt/distvol).
On the client:
[root@gluster-client ~]# yum install -y centos-release-gluster epel-release

[root@gluster-client ~]# yum install glusterfs-fuse -y

[root@gluster-client ~]# mount -t glusterfs -o acl gluster01:/distvol /mnt/distvol/

[root@gluster-client ~]# mount| grep gluster
gluster01:/distvol on /mnt/distvol type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,allow_other,max_read=131072)
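This mount is also one-off. To remount the volume automatically at boot, an fstab entry like the following should work (_netdev delays the mount until the network is up):
[root@gluster-client ~]# echo 'gluster01:/distvol /mnt/distvol glusterfs defaults,_netdev 0 0' >> /etc/fstab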

Create some files on the client.
[root@gluster-client ~]# cd /mnt/distvol/
[root@gluster-client distvol]# pwd
/mnt/distvol

[root@gluster-client distvol]# for i in {1..10};do echo $i > $i.txt;done

[root@gluster-client distvol]# ls
1.txt  10.txt  2.txt  3.txt  4.txt  5.txt  6.txt  7.txt  8.txt  9.txt
[root@gluster-client distvol]#

On gluster01:
[root@gluster01 ~]# ls /bricks/dist_brick1/brick/
4.txt  8.txt  9.txt

On gluster02:
[root@gluster02 ~]# ls /bricks/dist_brick1/brick/
1.txt  10.txt  2.txt  3.txt  5.txt  6.txt  7.txt

Some of the files are stored on gluster01 and the rest on gluster02; the distribute translator hashes each file name to pick a brick.
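If you are curious which brick a particular file landed on, the client mount exposes that through a virtual xattr (this assumes the attr package is installed on the client for getfattr):
[root@gluster-client distvol]# getfattr -n trusted.glusterfs.pathinfo -e text /mnt/distvol/1.txt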

Suspend the gluster01 VM on the KVM host.
virsh # suspend cent7-gluster01
Domain cent7-gluster01 suspended

On gluster02, gluster01 is now shown as disconnected.
[root@gluster02 ~]# gluster peer status
Number of Peers: 1

Hostname: gluster01
Uuid: 04666978-f467-497f-9d7a-86eec03e0ed7
State: Peer in Cluster (Disconnected)

On the client, try to create 10 files ( a_1.txt .. a_10.txt ). Creation fails for the files whose hash maps to the disconnected brick on gluster01.
[root@gluster-client distvol]# for i in {1..10};do echo $i > a_$i.txt;done
-bash: a_1.txt: Transport endpoint is not connected
-bash: a_2.txt: Transport endpoint is not connected
-bash: a_5.txt: Transport endpoint is not connected

[root@gluster-client distvol]# ls
1.txt  10.txt  2.txt  3.txt  5.txt  6.txt  7.txt  a_10.txt  a_3.txt  a_4.txt  a_6.txt  a_7.txt  a_8.txt  a_9.txt


On gluster02, the files that were created successfully are all on its brick:
[root@gluster02 ~]# ls /bricks/dist_brick1/brick/
1.txt  10.txt  2.txt  3.txt  5.txt  6.txt  7.txt  a_10.txt  a_3.txt  a_4.txt  a_6.txt  a_7.txt  a_8.txt  a_9.txt
[root@gluster02 ~]#

On the client, remove everything and try again; the same file names still fail while gluster01 is down.
[root@gluster-client distvol]# pwd
/mnt/distvol
[root@gluster-client distvol]# rm -f *.txt
[root@gluster-client distvol]# ls
[root@gluster-client distvol]#

[root@gluster-client distvol]# for i in {1..10};do echo $i > a_$i.txt;done
-bash: a_1.txt: Transport endpoint is not connected
-bash: a_2.txt: Transport endpoint is not connected
-bash: a_5.txt: Transport endpoint is not connected
[root@gluster-client distvol]#

Resume the gluster01 VM.
virsh # resume cent7-gluster01
Domain cent7-gluster01 resumed

virsh #

The peer status is back to Connected on both servers.
[root@gluster02 ~]# gluster peer status
Number of Peers: 1

Hostname: gluster01
Uuid: 04666978-f467-497f-9d7a-86eec03e0ed7
State: Peer in Cluster (Connected)

[root@gluster01 ~]# gluster peer status
Number of Peers: 1

Hostname: gluster02
Uuid: e8a4bb2e-3289-4002-9f33-0775f39e9588
State: Peer in Cluster (Connected)

[root@gluster-client distvol]# rm *.txt -f
[root@gluster-client distvol]# for i in {1..10};do echo $i > a_$i.txt;done
[root@gluster-client distvol]#
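Besides gluster peer status, the per-volume view is handy when checking whether all bricks came back; for example:
[root@gluster01 ~]# gluster volume status distvol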

[ replicate volume ]

Set up a replicated volume on gluster03 and gluster04.

On gluster01, probe the two new peers:
[root@gluster01 ~]# gluster peer probe gluster03
peer probe: success.
[root@gluster01 ~]# gluster peer probe gluster04
peer probe: success.
[root@gluster01 ~]#

$ ansible gluster -m shell -a 'gluster peer status'
192.168.130.112 | success | rc=0 >>
Number of Peers: 3

Hostname: gluster01
Uuid: 04666978-f467-497f-9d7a-86eec03e0ed7
State: Peer in Cluster (Connected)

Hostname: gluster02
Uuid: e8a4bb2e-3289-4002-9f33-0775f39e9588
State: Peer in Cluster (Connected)

Hostname: gluster04
Uuid: 905205fb-291b-4d54-b786-00b22a4cdcd9
State: Peer in Cluster (Connected)

192.168.130.111 | success | rc=0 >>
Number of Peers: 3

Hostname: gluster01
Uuid: 04666978-f467-497f-9d7a-86eec03e0ed7
State: Peer in Cluster (Connected)

Hostname: gluster03
Uuid: 0008cf97-7ed1-46d4-b795-9d7bd03c2667
State: Peer in Cluster (Connected)

Hostname: gluster04
Uuid: 905205fb-291b-4d54-b786-00b22a4cdcd9
State: Peer in Cluster (Connected)

192.168.130.113 | success | rc=0 >>
Number of Peers: 3

Hostname: gluster01
Uuid: 04666978-f467-497f-9d7a-86eec03e0ed7
State: Peer in Cluster (Connected)

Hostname: gluster02
Uuid: e8a4bb2e-3289-4002-9f33-0775f39e9588
State: Peer in Cluster (Connected)

Hostname: gluster03
Uuid: 0008cf97-7ed1-46d4-b795-9d7bd03c2667
State: Peer in Cluster (Connected)

192.168.130.110 | success | rc=0 >>
Number of Peers: 3

Hostname: gluster02
Uuid: e8a4bb2e-3289-4002-9f33-0775f39e9588
State: Peer in Cluster (Connected)

Hostname: gluster03
Uuid: 0008cf97-7ed1-46d4-b795-9d7bd03c2667
State: Peer in Cluster (Connected)

Hostname: gluster04
Uuid: 905205fb-291b-4d54-b786-00b22a4cdcd9
State: Peer in Cluster (Connected)

On gluster03, create a brick.
[root@gluster03 ~]# pvcreate /dev/vdb ; vgcreate vg_bricks /dev/vdb
[root@gluster03 ~]# lvcreate -L 2G -n lv-brick01 vg_bricks
[root@gluster03 ~]# mkfs.xfs -i size=512 /dev/vg_bricks/lv-brick01
[root@gluster03 ~]# mkdir -p /bricks/shadow_brick1
[root@gluster03 ~]# mount /dev/vg_bricks/lv-brick01 /bricks/shadow_brick1
[root@gluster03 ~]# mkdir /bricks/shadow_brick1/brick

On gluster04:
[root@gluster04 ~]# pvcreate /dev/vdb ; vgcreate vg_bricks /dev/vdb
 Physical volume "/dev/vdb" successfully created
 Volume group "vg_bricks" successfully created

[root@gluster04 ~]#  lvcreate -L 2G -n lv-brick01 vg_bricks
 Logical volume "lv-brick01" created.

[root@gluster04 ~]# mkfs.xfs -i size=512 /dev/vg_bricks/lv-brick01
meta-data=/dev/vg_bricks/lv-brick01 isize=512    agcount=4, agsize=131072 blks
        =                       sectsz=512   attr=2, projid32bit=1
        =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=524288, imaxpct=25
        =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=2560, version=2
        =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

[root@gluster04 ~]# mkdir -p /bricks/shadow_brick1

[root@gluster04 ~]# mount /dev/vg_bricks/lv-brick01 /bricks/shadow_brick1

[root@gluster04 ~]# mkdir /bricks/shadow_brick1/brick
[root@gluster04 ~]#

Create a replicated volume from gluster03.
[root@gluster03 ~]# gluster volume create shadowvol replica 2 gluster03:/bricks/shadow_brick1/brick gluster04:/bricks/shadow_brick1/brick
volume create: shadowvol: success: please start the volume to access data
[root@gluster03 ~]#

[root@gluster03 ~]# gluster volume start shadowvol
volume start: shadowvol: success

[root@gluster03 ~]# gluster volume info shadowvol

Volume Name: shadowvol
Type: Replicate
Volume ID: 145c0bb8-0fe9-45b8-861e-3f67a099db82
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster03:/bricks/shadow_brick1/brick
Brick2: gluster04:/bricks/shadow_brick1/brick
Options Reconfigured:
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
[root@gluster03 ~]#
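Not part of this log, but if the volume should only be mountable from this network, an access restriction can be set with a volume option such as:
[root@gluster03 ~]# gluster volume set shadowvol auth.allow 192.168.130.*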

On the client, mount the replicated volume:
[root@gluster-client ~]# mkdir /mnt/shadowvol
[root@gluster-client ~]# mount -t glusterfs gluster03:/shadowvol /mnt/shadowvol/
[root@gluster-client ~]#

[root@gluster-client ~]# mount| grep glusterfs
gluster01:/distvol on /mnt/distvol type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,allow_other,max_read=131072)
gluster03:/shadowvol on /mnt/shadowvol type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)

Create files on the replicated volume.
[root@gluster-client ~]# cd /mnt/shadowvol/
[root@gluster-client shadowvol]# for i in {1..10};do echo $i > a_$i.txt;done
[root@gluster-client shadowvol]# ls
a_1.txt  a_10.txt  a_2.txt  a_3.txt  a_4.txt  a_5.txt  a_6.txt  a_7.txt  a_8.txt  a_9.txt
[root@gluster-client shadowvol]#

All files are replicated to both gluster03 and gluster04.
[root@gluster03 ~]# ls /bricks/shadow_brick1/brick/
a_1.txt  a_10.txt  a_2.txt  a_3.txt  a_4.txt  a_5.txt  a_6.txt  a_7.txt  a_8.txt  a_9.txt
[root@gluster03 ~]#

[root@gluster04 ~]# ls /bricks/shadow_brick1/brick/
a_1.txt  a_10.txt  a_2.txt  a_3.txt  a_4.txt  a_5.txt  a_6.txt  a_7.txt  a_8.txt  a_9.txt
[root@gluster04 ~]#

Suspend the gluster04 VM.
virsh # suspend cent7-gluster04
Domain cent7-gluster04 suspended

virsh #

On gluster03, gluster04 is now reported as disconnected:
[root@gluster03 ~]# gluster peer status
Number of Peers: 3

Hostname: gluster01
Uuid: 04666978-f467-497f-9d7a-86eec03e0ed7
State: Peer in Cluster (Connected)

Hostname: gluster02
Uuid: e8a4bb2e-3289-4002-9f33-0775f39e9588
State: Peer in Cluster (Connected)

Hostname: gluster04
Uuid: 905205fb-291b-4d54-b786-00b22a4cdcd9
State: Peer in Cluster (Disconnected)
[root@gluster03 ~]#

Create more files on the client. Writes still succeed because the replica on gluster03 is available.
[root@gluster-client shadowvol]# for i in {1..10};do echo $i > b_$i.txt;done
[root@gluster-client shadowvol]# ls b_*
b_1.txt  b_10.txt  b_2.txt  b_3.txt  b_4.txt  b_5.txt  b_6.txt  b_7.txt  b_8.txt  b_9.txt
[root@gluster-client shadowvol]#

Resume the gluster04 VM.
virsh # resume cent7-gluster04
Domain cent7-gluster04 resumed

virsh #

On gluster04, all of the b_*.txt files created while it was suspended have been self-healed onto its brick after rejoining.
[root@gluster04 ~]# gluster peer status
Number of Peers: 3

Hostname: gluster01
Uuid: 04666978-f467-497f-9d7a-86eec03e0ed7
State: Peer in Cluster (Connected)

Hostname: gluster02
Uuid: e8a4bb2e-3289-4002-9f33-0775f39e9588
State: Peer in Cluster (Connected)

Hostname: gluster03
Uuid: 0008cf97-7ed1-46d4-b795-9d7bd03c2667
State: Peer in Cluster (Connected)
[root@gluster04 ~]#

[root@gluster04 ~]# ls /bricks/shadow_brick1/brick/
a_1.txt   a_2.txt  a_4.txt  a_6.txt  a_8.txt  b_1.txt   b_2.txt  b_4.txt  b_6.txt  b_8.txt
a_10.txt  a_3.txt  a_5.txt  a_7.txt  a_9.txt  b_10.txt  b_3.txt  b_5.txt  b_7.txt  b_9.txt
[root@gluster04 ~]#
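The copying is done by the self-heal daemon. If the files do not show up right away after the peer reconnects, the pending entries can be checked with:
[root@gluster03 ~]# gluster volume heal shadowvol info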

[ distributed and replicated volume ]

gluster01
[root@gluster01 ~]# lvcreate -L 100M -n prod_brick1 vg-bricks
[root@gluster01 ~]# mkfs.xfs -i size=512 /dev/vg-bricks/prod_brick1
[root@gluster01 ~]# mkdir -p /bricks/prod_brick1
[root@gluster01 ~]# mount /dev/vg-bricks/prod_brick1 /bricks/prod_brick1/
[root@gluster01 ~]# mkdir /bricks/prod_brick1/brick

gluster02
[root@gluster02 ~]# lvcreate -L 100M -n prod_brick1 vg-bricks
[root@gluster02 ~]# mkfs.xfs -i size=512 /dev/vg-bricks/prod_brick1
[root@gluster02 ~]# mkdir -p /bricks/prod_brick1
[root@gluster02 ~]#  mount /dev/vg-bricks/prod_brick1 /bricks/prod_brick1/
[root@gluster02 ~]# mkdir /bricks/prod_brick1/brick
[root@gluster02 ~]#

gluster03
[root@gluster03 ~]# lvcreate -L 100M -n prod_brick1 vg_bricks
[root@gluster03 ~]# mkfs.xfs -i size=512 /dev/vg_bricks/prod_brick1
[root@gluster03 ~]# mkdir -p /bricks/prod_brick1
[root@gluster03 ~]# mount /dev/vg_bricks/prod_brick1 /bricks/prod_brick1/
[root@gluster03 ~]# mkdir /bricks/prod_brick1/brick

gluster04
[root@gluster04 ~]#  lvcreate -L 100M -n prod_brick1 vg_bricks
 Logical volume "prod_brick1" created.
[root@gluster04 ~]# mkfs.xfs -i size=512 /dev/vg_bricks/prod_brick1
[root@gluster04 ~]# mkdir -p /bricks/prod_brick1
[root@gluster04 ~]# mount /dev/vg_bricks/prod_brick1 /bricks/prod_brick1/
[root@gluster04 ~]# mkdir /bricks/prod_brick1/brick
[root@gluster04 ~]#

Create a distributed-replicated volume across all four servers. With replica 2, adjacent bricks in the list form a replica pair, so gluster01/gluster02 mirror one set of files and gluster03/gluster04 the other.
[root@gluster01 ~]# gluster volume create dist-rep-vol replica 2 gluster01:/bricks/prod_brick1/brick gluster02:/bricks/prod_brick1/brick gluster03:/bricks/prod_brick1/brick gluster04:/bricks/prod_brick1/brick force

volume create: dist-rep-vol: success: please start the volume to access data
[root@gluster01 ~]#

[root@gluster01 ~]# gluster volume start dist-rep-vol
volume start: dist-rep-vol: success
[root@gluster01 ~]#
[root@gluster01 ~]# gluster volume info dist-rep-vol

Volume Name: dist-rep-vol
Type: Distributed-Replicate
Volume ID: d3114800-9ab1-4c9d-99a2-160ae7dd5fc9
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: gluster01:/bricks/prod_brick1/brick
Brick2: gluster02:/bricks/prod_brick1/brick
Brick3: gluster03:/bricks/prod_brick1/brick
Brick4: gluster04:/bricks/prod_brick1/brick
Options Reconfigured:
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
[root@gluster01 ~]#

On the client, mount the new volume:
[root@gluster-client ~]# mkdir  /mnt/dist-rep-vol
[root@gluster-client ~]# mount.glusterfs gluster01:/dist-rep-vol /mnt/dist-rep-vol/
[root@gluster-client ~]# mount| grep glusterfs
gluster01:/distvol on /mnt/distvol type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,allow_other,max_read=131072)
gluster03:/shadowvol on /mnt/shadowvol type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
gluster01:/dist-rep-vol on /mnt/dist-rep-vol type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
[root@gluster-client ~]#
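One caveat: the server named in the mount command is only used to fetch the volume file, but if it happens to be down at mount time the mount itself fails. Depending on the GlusterFS version, a fallback can be given with the backup-volfile-servers (or older backupvolfile-server) mount option, roughly:
[root@gluster-client ~]# mount -t glusterfs -o backup-volfile-servers=gluster02:gluster03 gluster01:/dist-rep-vol /mnt/dist-rep-vol/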

Create files on the distributed-replicated volume.
[root@gluster-client ~]# cd /mnt/dist-rep-vol/
[root@gluster-client dist-rep-vol]#
[root@gluster-client dist-rep-vol]# for i in {1..10};do echo $i > a_$i.txt;done
[root@gluster-client dist-rep-vol]# ls
a_1.txt  a_10.txt  a_2.txt  a_3.txt  a_4.txt  a_5.txt  a_6.txt  a_7.txt  a_8.txt  a_9.txt
[root@gluster-client dist-rep-vol]#

For reference, the last octet of each address maps to a host:
192.168.130.110 : gluster01
192.168.130.111 : gluster02
192.168.130.112 : gluster03
192.168.130.113 : gluster04

$ ansible gluster -m shell -a 'ls /bricks/prod_brick1/brick'
192.168.130.110 | success | rc=0 >>
a_1.txt
a_2.txt
a_5.txt

192.168.130.112 | success | rc=0 >>
a_10.txt
a_3.txt
a_4.txt
a_6.txt
a_7.txt
a_8.txt
a_9.txt

192.168.130.113 | success | rc=0 >>
a_10.txt
a_3.txt
a_4.txt
a_6.txt
a_7.txt
a_8.txt
a_9.txt

192.168.130.111 | success | rc=0 >>
a_1.txt
a_2.txt
a_5.txt

As expected, each file is placed on exactly one replica pair and mirrored within it.

[root@gluster-client dist-rep-vol]# rm a_*.txt -f
[root@gluster-client dist-rep-vol]#

Suspend gluster01.
virsh # suspend cent7-gluster01
Domain cent7-gluster01 suspended

virsh #

$ ansible gluster -m shell -a 'gluster peer status'
192.168.130.113 | success | rc=0 >>
Number of Peers: 3

Hostname: gluster01
Uuid: 04666978-f467-497f-9d7a-86eec03e0ed7
State: Peer in Cluster (Disconnected)

Hostname: gluster02
Uuid: e8a4bb2e-3289-4002-9f33-0775f39e9588
State: Peer in Cluster (Connected)

Hostname: gluster03
Uuid: 0008cf97-7ed1-46d4-b795-9d7bd03c2667
State: Peer in Cluster (Connected)

192.168.130.112 | success | rc=0 >>
Number of Peers: 3

Hostname: gluster01
Uuid: 04666978-f467-497f-9d7a-86eec03e0ed7
State: Peer in Cluster (Disconnected)

Hostname: gluster02
Uuid: e8a4bb2e-3289-4002-9f33-0775f39e9588
State: Peer in Cluster (Connected)

Hostname: gluster04
Uuid: 905205fb-291b-4d54-b786-00b22a4cdcd9
State: Peer in Cluster (Connected)

192.168.130.111 | success | rc=0 >>
Number of Peers: 3

Hostname: gluster01
Uuid: 04666978-f467-497f-9d7a-86eec03e0ed7
State: Peer in Cluster (Disconnected)

Hostname: gluster03
Uuid: 0008cf97-7ed1-46d4-b795-9d7bd03c2667
State: Peer in Cluster (Connected)

Hostname: gluster04
Uuid: 905205fb-291b-4d54-b786-00b22a4cdcd9
State: Peer in Cluster (Connected)

192.168.130.110 | FAILED => SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue

Confirm that the client can still use the volume through the remaining servers, gluster02, gluster03 and gluster04. The existing mount keeps working because the client talks to the bricks directly, and writes that would land on gluster01's brick are served by its replica gluster02.

On the client:
[root@gluster-client dist-rep-vol]# pwd
/mnt/dist-rep-vol
[root@gluster-client dist-rep-vol]# for i in {1..10};do echo $i > b_$i.txt;done
[root@gluster-client dist-rep-vol]# ls
b_1.txt  b_10.txt  b_2.txt  b_3.txt  b_4.txt  b_5.txt  b_6.txt  b_7.txt  b_8.txt  b_9.txt
[root@gluster-client dist-rep-vol]#

$ ansible gluster -m shell -a 'ls /bricks/prod_brick1/brick'
192.168.130.111 | success | rc=0 >>
b_10.txt
b_4.txt
b_5.txt
b_6.txt
b_8.txt
b_9.txt

192.168.130.113 | success | rc=0 >>
b_1.txt
b_2.txt
b_3.txt
b_7.txt

192.168.130.112 | success | rc=0 >>
b_1.txt
b_2.txt
b_3.txt
b_7.txt

192.168.130.110 | FAILED => SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue

Resume gluster01.
virsh # resume  cent7-gluster01
Domain cent7-gluster01 resumed

virsh #

[root@gluster02 ~]# gluster volume status dist-rep-vol
Status of volume: dist-rep-vol
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster01:/bricks/prod_brick1/brick   49153     0          Y       3193
Brick gluster02:/bricks/prod_brick1/brick   49153     0          Y       11277
Brick gluster03:/bricks/prod_brick1/brick   49153     0          Y       11299
Brick gluster04:/bricks/prod_brick1/brick   49153     0          Y       2923
Self-heal Daemon on localhost               N/A       N/A        Y       11458
Self-heal Daemon on gluster03               N/A       N/A        Y       11442
Self-heal Daemon on gluster04               N/A       N/A        Y       3067
Self-heal Daemon on gluster01               N/A       N/A        Y       3280

Task Status of Volume dist-rep-vol
------------------------------------------------------------------------------
There are no active volume tasks
