Reference
I set up HAProxy and Serf using the Vagrantfile provided at the URL above.
Many thanks!
This is very helpful for learning how Serf works.
- download the Vagrantfile.
$ git clone https://github.com/tcnksm-sample/serf-haproxy.git
$ cd serf-haproxy/
$ cp Vagrantfile Vagrantfile.org
|
- edit the Vagrantfile.
I already have the trusty64 box, so I used that instead of downloading precise64.
I also changed the Serf version from 0.5.0 to 0.6.4.
So I changed three things:
precise64 -> trusty64
serf version: 0.5.0 -> 0.6.4
deleted the vm.box_url line.
$ diff Vagrantfile Vagrantfile.org
5c5
< wget https://dl.bintray.com/mitchellh/serf/0.6.4_linux_amd64.zip -O serf.zip
---
> wget https://dl.bintray.com/mitchellh/serf/0.5.0_linux_amd64.zip -O serf.zip
14c14,15
< config.vm.box = "trusty64"
---
> config.vm.box = "precise64"
> config.vm.box_url = "http://files.vagrantup.com/precise64.box"
|
- start the VMs.
Three VMs will run: one is the HAProxy node and the other two are web servers.
$ vagrant up
|
$ vagrant status
Current machine states:
proxy running (virtualbox)
web1 running (virtualbox)
web2 running (virtualbox)
|
- access the proxy
$ vagrant ssh proxy
$ cd /vagrant
$ ls
basic handler.rb proxy.json Vagrantfile Vagrantfile.org web.json
|
The IP address of the HAProxy node is 172.20.20.10:
$ ifconfig eth0 | grep 'inet addr'
inet addr:10.0.2.15 Bcast:10.0.2.255 Mask:255.255.255.0
$ ifconfig eth1 | grep 'inet addr'
inet addr:172.20.20.10 Bcast:172.20.20.255 Mask:255.255.255.0
$ which serf
/usr/bin/serf
$ /usr/bin/serf --version
Serf v0.6.4
Agent Protocol: 4 (Understands back to: 2)
|
- run serf on the proxy.
$ sudo serf agent -config-file=proxy.json
==> Starting Serf agent...
==> Starting Serf agent RPC...
==> Serf agent running!
Node name: 'proxy'
Bind addr: '172.20.20.10:7946'
RPC addr: '127.0.0.1:7373'
Encrypted: false
Snapshot: false
Profile: lan
==> Log data will now stream in as it occurs:
2015/07/01 22:33:01 [INFO] agent: Serf agent starting
2015/07/01 22:33:01 [INFO] serf: EventMemberJoin: proxy 172.20.20.10
2015/07/01 22:33:02 [INFO] agent: Received event: member-join
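I did not paste proxy.json here, but judging from the agent output it must look roughly like the sketch below. The keys are standard Serf agent config options; the handler path is my assumption based on the files in /vagrant (web.json presumably adds a "start_join" entry pointing at 172.20.20.10):

```json
{
  "node_name": "proxy",
  "bind": "172.20.20.10",
  "event_handlers": [
    "/vagrant/handler.rb"
  ]
}
```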
|
- access web1 and start serf
web1 will join the cluster and will be added to /etc/haproxy/haproxy.cfg on the HAProxy node.
$ vagrant ssh web1
$ cd /vagrant/
$ sudo serf agent -config-file=web.json -node web1 -bind 172.20.20.111
==> Starting Serf agent...
==> Starting Serf agent RPC...
==> Serf agent running!
Node name: 'web1'
Bind addr: '172.20.20.111:7946'
RPC addr: '127.0.0.1:7373'
Encrypted: false
Snapshot: false
Profile: lan
==> Joining cluster...(replay: false)
Join completed. Synced with 1 initial agents
==> Log data will now stream in as it occurs:
2015/07/01 22:39:38 [INFO] agent: Serf agent starting
2015/07/01 22:39:38 [INFO] serf: EventMemberJoin: web1 172.20.20.111
2015/07/01 22:39:38 [INFO] agent: joining: [172.20.20.10] replay: false
2015/07/01 22:39:38 [INFO] serf: EventMemberJoin: proxy 172.20.20.10
2015/07/01 22:39:38 [INFO] agent: joined: 1 nodes
2015/07/01 22:39:39 [INFO] agent: Received event: member-join
|
- on the proxy
Confirm that serf on the proxy received a member-join event and added web1 for load balancing:
2015/07/01 22:39:39 [INFO] serf: EventMemberJoin: web1 172.20.20.111
2015/07/01 22:39:40 [INFO] agent: Received event: member-join
|
$ cat /etc/haproxy/haproxy.cfg
global
    daemon
    maxconn 256
defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms
listen stats
    bind *:9999
    mode http
    stats enable
    stats uri /
    stats refresh 1s
listen http-in
    bind *:80
    balance roundrobin
    option http-server-close
    server web1 172.20.20.111:80 check
|
- access web2 and start serf
$ vagrant ssh web2
$ cd /vagrant
$ sudo serf agent -config-file=web.json -node web2 -bind 172.20.20.112
==> Starting Serf agent...
==> Starting Serf agent RPC...
==> Serf agent running!
Node name: 'web2'
Bind addr: '172.20.20.112:7946'
RPC addr: '127.0.0.1:7373'
Encrypted: false
Snapshot: false
Profile: lan
==> Joining cluster...(replay: false)
Join completed. Synced with 1 initial agents
==> Log data will now stream in as it occurs:
2015/07/01 22:48:57 [INFO] agent: Serf agent starting
2015/07/01 22:48:57 [INFO] serf: EventMemberJoin: web2 172.20.20.112
2015/07/01 22:48:57 [INFO] agent: joining: [172.20.20.10] replay: false
2015/07/01 22:48:57 [INFO] serf: EventMemberJoin: web1 172.20.20.111
2015/07/01 22:48:57 [INFO] serf: EventMemberJoin: proxy 172.20.20.10
2015/07/01 22:48:57 [INFO] agent: joined: 1 nodes
2015/07/01 22:48:58 [INFO] agent: Received event: member-join
|
- on the proxy
web2 has been added to haproxy.cfg on the proxy:
2015/07/01 22:48:57 [INFO] serf: EventMemberJoin: web2 172.20.20.112
2015/07/01 22:48:58 [INFO] agent: Received event: member-join
|
$ cat /etc/haproxy/haproxy.cfg
global
    daemon
    maxconn 256
defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms
listen stats
    bind *:9999
    mode http
    stats enable
    stats uri /
    stats refresh 1s
listen http-in
    bind *:80
    balance roundrobin
    option http-server-close
    server web1 172.20.20.111:80 check
    server web2 172.20.20.112:80 check
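With balance roundrobin, HAProxy now alternates requests between the two backends. A toy illustration of the selection order (my own sketch, not HAProxy code):

```ruby
# Round-robin picks backends in a fixed rotation.
backends = ["172.20.20.111", "172.20.20.112"]
picks = (0...4).map { |i| backends[i % backends.size] }
puts picks.join(", ")
# => 172.20.20.111, 172.20.20.112, 172.20.20.111, 172.20.20.112
```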
|
- stop web2
$ vagrant halt web2
==> web2: Attempting graceful shutdown of VM…
$ vagrant status
Current machine states:
proxy running (virtualbox)
web1 running (virtualbox)
web2 poweroff (virtualbox)
|
- on the proxy
serf on the proxy received a ‘member-failed’ event:
2015/07/01 22:56:44 [INFO] memberlist: Suspect web2 has failed, no acks received
2015/07/01 22:56:46 [INFO] memberlist: Suspect web2 has failed, no acks received
2015/07/01 22:56:47 [INFO] memberlist: Suspect web2 has failed, no acks received
2015/07/01 22:56:49 [INFO] memberlist: Marking web2 as failed, suspect timeout reached
2015/07/01 22:56:49 [INFO] serf: EventMemberFailed: web2 172.20.20.112
2015/07/01 22:56:50 [INFO] memberlist: Suspect web2 has failed, no acks received
2015/07/01 22:56:50 [INFO] agent: Received event: member-failed
2015/07/01 22:57:01 [INFO] serf: attempting reconnect to web2 172.20.20.112:7946
|
- check haproxy.cfg
web2 has not been removed:
listen http-in
    bind *:80
    balance roundrobin
    option http-server-close
    server web1 172.20.20.111:80 check
    server web2 172.20.20.112:80 check
|
- check handler.rb on the proxy (/vagrant/handler.rb)
when 'member-join'
  File.open(CONFIGFILE, "a") do |f|
    f.puts "    server #{info[:node]} #{info[:ip]}:80 check"
  end
when 'member-leave'
  target = "    server #{info[:node]} #{info[:ip]}:80 check"
  FileUtils.rm(TMP_CONFIGFILE) if File.exist?(TMP_CONFIGFILE)
  File.open(TMP_CONFIGFILE, "w") do |f|
    File.open(CONFIGFILE, "r").each do |line|
      next if line.chomp == target
      f.write(line)
    end
  end
  FileUtils.mv(TMP_CONFIGFILE, CONFIGFILE)
end
|
As seen from the Ruby script, the handler removes a node only when serf on the proxy receives a ‘member-leave’ event; in my case, however, the event was ‘member-failed’. That’s why web2 was not removed.
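One way to handle this would be to let the handler treat ‘member-failed’ the same as ‘member-leave’. The sketch below is my own addition, not part of the sample repo; the config path and the server-line format are assumptions modeled on handler.rb above. The serf agent sets SERF_EVENT and feeds "name address role tags" lines on STDIN for membership events:

```ruby
# Sketch: remove a failed node from haproxy.cfg on member-failed as well.
# Assumptions: config path and "server <node> <ip>:80 check" line format.

# Drop the backend line for the given node from the config text.
def drop_server(config_text, node, ip)
  target = "server #{node} #{ip}:80 check"
  config_text.each_line.reject { |line| line.strip == target }.join
end

if %w[member-leave member-failed].include?(ENV['SERF_EVENT'])
  STDIN.each_line do |line|
    node, ip = line.split            # "name address role tags"
    cfg = File.read('/etc/haproxy/haproxy.cfg')
    File.write('/etc/haproxy/haproxy.cfg', drop_server(cfg, node, ip))
  end
end
```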
- stop serf on web1 with Ctrl + C to send a ‘member-leave’ event
^C==> Caught signal: interrupt
==> Gracefully shutting down agent...
2015/07/02 22:14:48 [INFO] agent: requesting graceful leave from Serf
2015/07/02 22:14:48 [INFO] serf: EventMemberLeave: web1 172.20.20.111
2015/07/02 22:14:48 [INFO] agent: requesting serf shutdown
2015/07/02 22:14:48 [INFO] agent: shutdown complete
web1#
|
- on the proxy
2015/07/02 22:14:48 [INFO] serf: EventMemberLeave: web1 172.20.20.111
2015/07/02 22:14:49 [INFO] agent: Received event: member-leave
|
web1 has been removed
# grep server /etc/haproxy/haproxy.cfg
timeout server 50000ms
option http-server-close
server web2 172.20.20.112:80 check
|
- run serf on web1 again.
# sudo serf agent -config-file=web.json -node web1 -bind 172.20.20.111
|
- on the proxy
web1 has been added again:
# grep server /etc/haproxy/haproxy.cfg
timeout server 50000ms
option http-server-close
server web2 172.20.20.112:80 check
server web1 172.20.20.111:80 check
|
By default, serf uses both TCP and UDP port 7946 to communicate with other nodes, and TCP port 7373 for RPC. The RPC port is used by the other serf subcommands.
# sudo lsof -np 1789 | egrep -i 'udp|tcp'
serf 1789 root 3u IPv4 12112 0t0 TCP 172.20.20.10:7946 (LISTEN)
serf 1789 root 5u IPv4 12113 0t0 UDP 172.20.20.10:7946
serf 1789 root 6u IPv4 12114 0t0 TCP 127.0.0.1:7373 (LISTEN)
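The same check can be scripted. This is a hypothetical helper, not part of the setup; it probes a TCP port, demonstrated here against a throwaway local listener rather than a live Serf agent (a real check would target ports 7946 and 7373):

```ruby
require 'socket'

# Returns true if something is accepting TCP connections on host:port.
def tcp_listening?(host, port)
  Socket.tcp(host, port, connect_timeout: 1) { true }
rescue SystemCallError, SocketError
  false
end

# Demo against a throwaway listener on an ephemeral port.
server = TCPServer.new('127.0.0.1', 0)
port = server.addr[1]
puts tcp_listening?('127.0.0.1', port)   # true while the listener is up
server.close
```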
|
- tcpdump on the proxy
# sudo tcpdump -n -i eth1 '(host 172.20.20.111 or host 172.20.20.112) and port 7946'
22:28:07.125456 IP 172.20.20.112.7946 > 172.20.20.10.7946: UDP, length 35
22:28:07.139128 IP 172.20.20.10.7946 > 172.20.20.112.7946: UDP, length 35
22:28:07.536092 IP 172.20.20.10.7946 > 172.20.20.112.7946: UDP, length 21
22:28:07.536773 IP 172.20.20.112.7946 > 172.20.20.10.7946: UDP, length 11
22:28:07.903947 IP 172.20.20.111.33691 > 172.20.20.10.7946: Flags [S], seq 4178137503, win 29200, options [mss 1460,sackOK,TS val 344984 ecr 0,nop,wscale 6], length 0
22:28:07.904035 IP 172.20.20.10.7946 > 172.20.20.111.33691: Flags [S.], seq 823688400, ack 4178137504, win 28960, options [mss 1460,sackOK,TS val 351707 ecr 344984,nop,wscale 6], length 0
22:28:07.904332 IP 172.20.20.111.33691 > 172.20.20.10.7946: Flags [.], ack 1, win 457, options [nop,nop,TS val 344984 ecr 351707], length 0
22:28:07.904877 IP 172.20.20.111.33691 > 172.20.20.10.7946: Flags [P.], seq 1:227, ack 1, win 457, options [nop,nop,TS val 344984 ecr 351707], length 226
22:28:07.904900 IP 172.20.20.10.7946 > 172.20.20.111.33691: Flags [.], ack 227, win 470, options [nop,nop,TS val 351707 ecr 344984], length 0
22:28:07.906645 IP 172.20.20.10.7946 > 172.20.20.111.33691: Flags [P.], seq 1:330, ack 227, win 470, options [nop,nop,TS val 351707 ecr 344984], length 329
22:28:07.906687 IP 172.20.20.111.33691 > 172.20.20.10.7946: Flags [.], ack 330, win 473, options [nop,nop,TS val 344984 ecr 351707], length 0
22:28:07.906877 IP 172.20.20.10.7946 > 172.20.20.111.33691: Flags [F.], seq 330, ack 227, win 470, options [nop,nop,TS val 351707 ecr 344984], length 0
22:28:07.907021 IP 172.20.20.111.33691 > 172.20.20.10.7946: Flags [F.], seq 227, ack 331, win 473, options [nop,nop,TS val 344984 ecr 351707], length 0
22:28:07.907040 IP 172.20.20.10.7946 > 172.20.20.111.33691: Flags [.], ack 228, win 470, options [nop,nop,TS val 351707 ecr 344984], length 0
|