Corosync and HA-proxy

We will use corosync to create a virtual IP-address which is shared by two systems. At any time one of the systems holds the virtual IP-address; if that system goes down, the other system takes over the virtual IP-address. HA-proxy will be running on each of the hosts, relaying incoming requests to the backend webservers.

  • 192.168.1.49 haproxy01
  • 192.168.1.50 haproxy02
  • 192.168.1.51 www (virtual IP-address)
  • 192.168.1.54 mariadb (virtual IP)

Software

apt-get install corosync haproxy crmsh

Configuration

Before binding to the DNS name www, make sure the DNS server knows that name. The virtual IP-address is not tied to a permanent physical or virtual network interface, so the DNS update has to be done manually.
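
If no DNS server is available, a minimal alternative (an assumption, not part of the original setup) is to add the names to /etc/hosts on both hosts and on any test client, repeating the addresses listed above:

# /etc/hosts entries for the addresses used in this guide
192.168.1.49    haproxy01
192.168.1.50    haproxy02
192.168.1.51    www
192.168.1.54    mariadb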

HA-proxy

HA-proxy resource script

Download the HA-proxy resource script, place it in /usr/lib/ocf/resource.d/heartbeat/haproxy and make it executable.

wget https://raw.githubusercontent.com/thisismitch/cluster-agents/master/haproxy -O /usr/lib/ocf/resource.d/heartbeat/haproxy
chmod +x /usr/lib/ocf/resource.d/heartbeat/haproxy

As we will let corosync start haproxy, we have to disable haproxy as a system service so it is not started twice.

service haproxy stop
systemctl disable haproxy
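
To verify that haproxy will no longer be started by the init system (a quick sanity check, not from the original guide):

# haproxy should be reported as inactive and disabled
systemctl status haproxy
systemctl is-enabled haproxy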

Webfarm

Add the following to /etc/haproxy/haproxy.cfg. The backend servers www01 and www02 are the webservers that receive the relayed requests.

listen webfarm 
        bind www:80
        mode http
        balance roundrobin
        cookie LBN insert indirect nocache
        option httpclose
        option forwardfor
        server haproxy01 www01:80 cookie node1 check
        server haproxy02 www02:80 cookie node2 check

Copy the configuration /etc/haproxy/haproxy.cfg to the other host.
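
Before handing the configuration over to the cluster, it can be checked for syntax errors and copied with scp (the scp line assumes root ssh access between the hosts; adjust to your setup):

# validate the configuration file
haproxy -c -f /etc/haproxy/haproxy.cfg

# copy it to the second host (assumes root ssh access)
scp /etc/haproxy/haproxy.cfg root@haproxy02:/etc/haproxy/haproxy.cfg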

Corosync

Encryption

Edit /etc/corosync/corosync.conf and make the following changes.

totem {
        cluster_name: www
        crypto_cipher: aes256
        crypto_hash: sha512
}

Authentication Key

Generate the authkey. Note that this requires entropy, so keep using the system while the key is being generated. Speed this up by running dd if=/dev/vda of=/dev/null in another terminal or by installing haveged.

corosync-keygen
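
To see whether the key generation is starved for entropy (an optional check, not part of the original steps):

# a value of only a few hundred or less means corosync-keygen will block waiting for entropy
cat /proc/sys/kernel/random/entropy_avail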

Network

Add the local network and a multicast address to the interface subsection of the totem block in /etc/corosync/corosync.conf.

interface {
        bindnetaddr: 192.168.1.0 
        mcastaddr: 239.192.1.1
}
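
For reference, the stock Debian corosync.conf usually already contains the remaining interface settings; a typical complete block looks roughly like this (ringnumber and mcastport are the common defaults, assumed rather than taken from the original):

interface {
        ringnumber: 0
        bindnetaddr: 192.168.1.0
        mcastaddr: 239.192.1.1
        mcastport: 5405
}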

Copy the configuration /etc/corosync/corosync.conf and the authkey /etc/corosync/authkey to the other host.
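
A sketch of the copy step with scp (again assuming root ssh access between the hosts):

# -p preserves the file modes; the authkey must stay readable by root only
scp -p /etc/corosync/corosync.conf /etc/corosync/authkey root@haproxy02:/etc/corosync/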

Restart corosync on both hosts to load the new configuration.

service corosync restart

The two nodes should now see each other. Run crm status to check.

Stack: corosync
Current DC: haproxy01 (version 1.1.16-94ff4df) - partition with quorum
Last updated: Sat Feb  1 14:41:09 2020
Last change: Sat Feb  1 14:40:51 2020 by hacluster via crmd on haproxy01

2 nodes configured
0 resources configured

Online: [ haproxy01 haproxy02 ]

No resources
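
Corosync itself can also be asked about the ring status (an extra check, not in the original text); each ring should be reported as active with no faults.

# show the local node id and the status of each ring
corosync-cfgtool -s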

Nodes

Tell the cluster not to kill (fence) the other node. This is needed because, by default, the cluster expects unreachable nodes to be fenced and will not run resources without a fencing device. It can be changed later on, once fencing has been configured.

crm configure property stonith-enabled=false

Ignore loss of quorum, since the cluster has only two nodes. This makes sure that the remaining node keeps the resources running when the other node is down.

crm configure property no-quorum-policy=ignore
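
The properties set so far can be reviewed at any time (an optional check):

# print the current cluster configuration, including the two properties above
crm configure show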

Configure the shared IP-address. This is the virtual IP-address that will move between the two hosts.

crm configure primitive wwwSharedIP ocf:heartbeat:IPaddr2 params ip=192.168.1.51 cidr_netmask=24 op monitor interval=5s

Create the haproxy resource so that the cluster starts and monitors HA-proxy.

crm configure primitive wwwLoadBalance ocf:heartbeat:haproxy params conffile=/etc/haproxy/haproxy.cfg op monitor interval=10s

Group the shared IP-address and the haproxy resource so that the same server always runs both at the same time.

crm configure group www wwwSharedIP wwwLoadBalance

The IP-address should be up before haproxy starts, so order the resources accordingly.

crm configure order haproxyAfterIP mandatory: wwwSharedIP wwwLoadBalance
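
After these steps the resources should be running on one of the nodes. A quick way to verify (commands only; the exact output depends on your versions):

# both resources should be shown as Started on the same node
crm status

# on the active node, the virtual IP-address 192.168.1.51 should be listed
ip addr show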

Test it

while true; do wget --quiet http://www/ -O /dev/stdout; sleep 1; done
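
To test the failover itself, put the active node into standby and watch the virtual IP-address and haproxy move to the other node; the wget loop above should keep returning pages. This sketch assumes haproxy01 is currently the active node:

# move all resources away from haproxy01
crm node standby haproxy01

# check that wwwSharedIP and wwwLoadBalance are now running on haproxy02
crm status

# bring haproxy01 back into the cluster
crm node online haproxy01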

References

  • https://wiki.debian.org/Debian-HA/ClustersFromScratch
  • https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/1.1/html/Pacemaker_Explained/s-cluster-options.html
  • https://www.digitalocean.com/community/tutorials/how-to-create-a-high-availability-setup-with-corosync-pacemaker-and-floating-ips-on-ubuntu-14-04
  • https://www.howtoforge.com/tutorial/how-to-setup-haproxy-as-load-balancer-for-mariadb-on-centos-7/#b-install-mariadb-galera-server