GlusterFS Server with Arbiter

When running two GlusterFS servers as a replica, there is a slight chance that a "split brain" situation will occur. To get around this, a third GlusterFS server is configured as an arbiter. The arbiter does not hold a copy of the data from the other GlusterFS servers, but it has all the metadata. This way the arbiter can help the other servers avoid the "split brain" scenario without using disk space for the data.

In this setup we will configure two GlusterFS servers as a replica pair and one as the arbiter.

Network

The GlusterFS servers will have fixed IP addresses. These are configured in the DHCP server's list of statically assigned IP addresses, using each domain's MAC address.
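
For example, with isc-dhcp-server a static assignment looks roughly like this host block (the MAC and IP addresses here are made-up placeholders, use your domain's actual values):

host gluster01 {
    hardware ethernet 52:54:00:aa:bb:01;
    fixed-address 192.168.122.11;
}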

Disk

Add hardware

To keep things separated, the OS will be on one disk and the GlusterFS filesystem on another. For each of the three servers we will add a disk to the domain. This disk will be used for the GlusterFS file storage.
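
Assuming the servers are libvirt/KVM domains, as the term "domain" suggests, adding the disk could look like this sketch. The image path and the 5G size are examples only:

qemu-img create -f qcow2 /var/lib/libvirt/images/gluster01-data.qcow2 5G
virsh attach-disk gluster01 /var/lib/libvirt/images/gluster01-data.qcow2 vdb --driver qemu --subdriver qcow2 --persistent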

Configure disk

Partition

Create one large partition on /dev/vdb with partition type 8e Linux LVM.
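
This can be done interactively with fdisk, or non-interactively, for example with sfdisk:

echo 'type=8e' | sfdisk /dev/vdb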

LVM

Set up a new physical volume and create a volume group on it.

pvcreate /dev/vdb1
vgcreate vg_data /dev/vdb1
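
Optionally verify the result:

pvs /dev/vdb1
vgs vg_data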

Replica

Add a volume

lvcreate --size 4G --name gluster_home vg_data

Arbiter

Add a volume

lvcreate --size 256M --name gluster_home vg_data

Format

Create a filesystem on the volume.

Replica

mkfs.ext4 /dev/vg_data/gluster_home

Arbiter

Make sure to create enough inodes on the arbiter. The filesystem for the arbiter is small, since every file is stored with size 0, but each file still needs an inode. By default mkfs.ext4 derives the number of inodes from the filesystem size using a bytes-per-inode ratio (taken from /etc/mke2fs.conf); with -N a fixed number of inodes is set instead.

mkfs.ext4 -N 131072 /dev/vg_data/gluster_home
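
The resulting inode count can be checked before mounting:

tune2fs -l /dev/vg_data/gluster_home | grep -i 'inode count'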

Mountpoint

Create the mountpoint.

mkdir /srv/home

fstab

Add the volume to /etc/fstab.

/dev/vg_data/gluster_home /srv/home        ext4   defaults        0       0

Mount

Mount the new volume.

mount /srv/home

If the mount command does not succeed, it is most likely because the fstab entry is incorrect.
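
To confirm that the filesystem is mounted where expected:

findmnt /srv/home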

Software

Mount Point

GlusterFS is not happy about using a directory which is also a mount point:

volume create: home: failed: The brick gluster01:/srv/home is a mount point. Please create a sub-directory under the mount point and use that as the brick directory. Or use 'force' at the end of the command if you want to override this behavior.

If for some reason the filesystem is not mounted, GlusterFS might misread the empty mount point as the brick having been emptied, tell the other servers that the directory is now empty, and all files would be deleted on all servers. To avoid this, a directory is created under the mount point on each server and used as the brick.

mkdir /srv/home/brick

Install

Install the server.

apt-get install glusterfs-server

If you are running Debian Buster, make sure the GlusterFS server starts automatically at boot, then start it:

systemctl enable glusterd.service
service glusterd start
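
Verify that the daemon is running:

systemctl status glusterd.service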

Introduce Servers

The GlusterFS servers need to know each other. This can be done from any one of the servers; here we will do it from gluster01 and introduce the two other servers.

gluster peer probe gluster02
gluster peer probe gluster03

Server Status

Now check that the servers were properly probed.

gluster peer status

Number of Peers: 2

Hostname: gluster02
Uuid: c041f1eb-72b5-4737-b25e-4f68c3379ef1
State: Peer in Cluster (Connected)

Hostname: gluster03
Uuid: a03ea1dc-5e74-4d1c-aa80-7d0386bce1e7
State: Peer in Cluster (Connected)

Create Volume

Create the gluster volume. With 'replica 3 arbiter 1' the last brick in the list, gluster03, becomes the arbiter.

gluster volume create home replica 3 arbiter 1 gluster01:/srv/home/brick gluster02:/srv/home/brick gluster03:/srv/home/brick

Consistent Metadata

Enable consistent metadata for the volume.

gluster volume set home cluster.consistent-metadata on
gluster volume set home features.utime on
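
On recent GlusterFS versions the settings can be checked with 'gluster volume get':

gluster volume get home cluster.consistent-metadata
gluster volume get home features.utime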

Start Volume

Finally we can start the volume.

gluster volume start home

Volume Status

Check the status of the volume.

gluster volume status

Status of volume: home
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster01:/srv/home/brick             49152     0          Y       808  
Brick gluster02:/srv/home/brick             49152     0          Y       764  
Brick gluster03:/srv/home/brick             49152     0          Y       769  
Self-heal Daemon on localhost               N/A       N/A        Y       828  
Self-heal Daemon on gluster02               N/A       N/A        Y       785  
Self-heal Daemon on gluster03               N/A       N/A        Y       791  
 
Task Status of Volume home
------------------------------------------------------------------------------
There are no active volume tasks

Volume Information

Check that one of the servers is an arbiter and that it is the correct server.

gluster volume info

Volume Name: home
Type: Replicate
Volume ID: 26ad72e2-868b-44f1-b51f-24a6eb5f380b
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gluster01:/srv/home/brick
Brick2: gluster02:/srv/home/brick
Brick3: gluster03:/srv/home/brick (arbiter)
Options Reconfigured:
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
