Corosync and HA-proxy
We will use corosync to manage a virtual IP-address shared by two systems. At any time one of the systems holds the virtual IP-address; if that system goes down, the other takes it over. HA-proxy runs on both hosts, relaying incoming HTTP-requests to two webservers.
- 192.168.1.49 www (virtual IP-address)
- 192.168.1.50 haproxy01
- 192.168.1.51 haproxy02
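Since the names may not be in DNS yet, one option is to put them in /etc/hosts on both nodes (a sketch using the addresses above; adjust to your network):

```
192.168.1.49    www
192.168.1.50    haproxy01
192.168.1.51    haproxy02
```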
Software
apt-get install corosync haproxy crmsh
Configuration
Before binding to the DNS name www, make sure the DNS server knows that name. The virtual IP-address is not tied to a physical or virtual network interface, so the DNS update has to be done manually.
Corosync
Encryption
Edit /etc/corosync/corosync.conf and make the following changes.
totem {
    cluster_name: www
    crypto_cipher: aes256
    crypto_hash: sha512
}
Authentication Key
Generate the authkey. This requires entropy, which is gathered while you use the system; speed it up by running dd if=/dev/vda of=/dev/null in another terminal.
corosync-keygen
Network
Add the local network and a multicast address to /etc/corosync/corosync.conf in the interface section.
interface {
    bindnetaddr: 192.168.1.0
    mcastaddr: 239.192.1.1
}
Copy the configuration /etc/corosync/corosync.conf and the authkey /etc/corosync/authkey to the other host.
Restart corosync on both hosts to load the new configuration.
service corosync restart
The two nodes should now be configured to see each other. Run crm status to check.
Stack: corosync
Current DC: haproxy01 (version 1.1.16-94ff4df) - partition with quorum
Last updated: Sat Feb  1 14:41:09 2020
Last change: Sat Feb  1 14:40:51 2020 by hacluster via crmd on haproxy01

2 nodes configured
0 resources configured

Online: [ haproxy01 haproxy02 ]

No resources
HA-proxy
HA-proxy resource script
Download the HA-proxy resource script, place it in /usr/lib/ocf/resource.d/heartbeat/haproxy and make it executable.
wget https://raw.githubusercontent.com/thisismitch/cluster-agents/master/haproxy -O /usr/lib/ocf/resource.d/heartbeat/haproxy
chmod +x /usr/lib/ocf/resource.d/heartbeat/haproxy
This has to be done on both hosts.
Webfarm
Add the following to /etc/haproxy/haproxy.cfg
listen webfarm
    bind www:80
    mode http
    balance roundrobin
    cookie LBN insert indirect nocache
    option httpclose
    option forwardfor
    server haproxy01 www01:80 cookie node1 check
    server haproxy02 www02:80 cookie node2 check
Copy the configuration /etc/haproxy/haproxy.cfg to the other host.
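Before restarting, the configuration can be syntax-checked on each host with HA-proxy's built-in check mode:

```shell
# Check the configuration file for errors without starting the proxy
haproxy -c -f /etc/haproxy/haproxy.cfg
```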
Restart HA-proxy on both hosts to load the new configuration.
service haproxy restart
Nodes
Disable STONITH, so the nodes will not try to fence (kill) each other.
crm configure property stonith-enabled=false
Disable the quorum requirement, since there are only two nodes. This makes sure the cluster keeps running when only one node is up.
crm configure property no-quorum-policy=ignore
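The reason is the majority rule: a partition has quorum only with more than half of the votes, and with two nodes that means both of them. A quick sketch of the arithmetic:

```shell
# Minimum votes for a majority: floor(n/2) + 1
quorum() { echo $(( $1 / 2 + 1 )); }

quorum 2   # 2: losing one of two nodes loses quorum, hence the ignore policy
quorum 3   # 2: a three-node cluster survives one node failure
```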
Configure shared IP-address.
crm configure primitive haproxy_wwwSharedIP ocf:heartbeat:IPaddr2 params ip=192.168.1.49 cidr_netmask=24 op monitor interval=5s
Create heartbeat for haproxy.
crm configure primitive haproxy_wwwLoadBalance ocf:heartbeat:haproxy params conffile=/etc/haproxy/haproxy.cfg op monitor interval=10s
Make sure the same server has both IP and service at the same time.
crm configure group haproxy_www haproxy_wwwSharedIP haproxy_wwwLoadBalance
Create the colocation relation between the IP-address and the haproxy service, so they always run on the same node.
crm configure colocation haproxyWithIPs INFINITY: haproxy_wwwLoadBalance haproxy_wwwSharedIP
Since both resources are already in the haproxy_www group, this currently gives warnings that the constraint should apply to the group instead:
WARNING: haproxyWithIPs: resource haproxy_wwwLoadBalance is grouped, constraints should apply to the group
WARNING: haproxyWithIPs: resource haproxy_wwwSharedIP is grouped, constraints should apply to the group
IP-address should be up before haproxy starts.
crm configure order haproxyAfterIPs mandatory: haproxy_wwwSharedIP haproxy_wwwLoadBalance
Start over
To wipe the cluster configuration and start from scratch, run:
rm /var/lib/heartbeat/crm/cib*
Test it
Request the page through the virtual IP-address once a second; the loop should keep producing output while one of the nodes is taken down.
while true; do wget --quiet http://www/ -O /dev/stdout; sleep 1; done
References
- https://wiki.debian.org/Debian-HA/ClustersFromScratch
- https://www.digitalocean.com/community/tutorials/how-to-create-a-high-availability-setup-with-corosync-pacemaker-and-floating-ips-on-ubuntu-14-04