GlusterFS Server and Arbiter

When running two GlusterFS servers as a replica, there is a slight chance that a "split brain" situation will occur. To get around this, a third GlusterFS server is configured as an arbiter. The arbiter does not hold a copy of the data from the other GlusterFS servers, but it has all the metadata. This way the arbiter can help the other servers avoid the "split brain" scenario without using the disk space for the data.

In this setup we will configure two GlusterFS servers as replicas and one as the arbiter.

Network

The GlusterFS servers will have fixed IP addresses. These are configured in the DHCP server's list of statically assigned IP addresses, keyed on each domain's MAC address; an example entry is sketched below the list.

  • 192.168.1.44 gluster05 (replica)
  • 192.168.1.45 gluster06 (replica)
  • 192.168.1.46 gluster07 (arbiter)
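
What the static assignment looks like depends on the DHCP server in use. As a sketch, assuming ISC dhcpd, an entry for gluster05 could look like this (the MAC address is a placeholder):

host gluster05 {
    # placeholder MAC; use the address of the domain's network interface
    hardware ethernet 52:54:00:00:00:44;
    fixed-address 192.168.1.44;
}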

Disk

Add hardware

To keep things separated, the OS will be on one disk and the GlusterFS filesystem will be on another disk. For two of the three servers we will add a disk to the domain. This disk will be used for the GlusterFS file storage.

Configure disk

Partition

Create one large partition on /dev/vdb with partition type 8e Linux LVM.
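
This can be done interactively with fdisk, or scripted; a minimal sketch using sfdisk:

# one partition covering the whole disk, MBR type 8e (Linux LVM)
echo 'type=8e' | sfdisk /dev/vdb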

LVM

Set up a new physical volume and configure a volume group.

pvcreate /dev/vdb1
vgcreate vg2 /dev/vdb1

Replica

Add a logical volume for the data.

lvcreate --size 4G --name gluster_home vg2

Arbiter

Add a smaller logical volume; the arbiter stores only metadata.

lvcreate --size 512M --name gluster_home vg2
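
The standard LVM reporting commands can be used to verify the result; on a replica, lvs should show a 4G gluster_home volume in vg2:

pvs   # physical volumes, should list /dev/vdb1 in vg2
vgs   # volume groups, should list vg2
lvs   # logical volumes, should list gluster_home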

Format

Create a filesystem on the volume.

mkfs.btrfs /dev/vg2/gluster_home

Mountpoint

Create the mountpoint.

mkdir /srv/home

fstab

Add the volume to /etc/fstab.

/dev/vg2/gluster_home /srv/home              btrfs   defaults        0       0

Mount

Mount the new volume.

mount /srv/home

If the mount command does not succeed, it is most likely because the fstab entry is incorrect.
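
A quick way to confirm that the filesystem is mounted where we expect:

# should list /srv/home backed by /dev/mapper/vg2-gluster_home
findmnt /srv/home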

Software

Mount Point

GlusterFS is not happy about using a directory which is also a mount point.

volume create: home: failed: The brick gluster05:/srv/home is a mount point. Please create a sub-directory under the mount point and use that as the brick directory. Or use 'force' at the end of the command if you want to override this behavior.

If for some reason the filesystem is not mounted, GlusterFS might misunderstand the situation and tell the other servers that the directory is now empty, and all files would then be deleted on all servers. To avoid this, a directory is created under the mount point.

mkdir /srv/home/brick

Install

Install the server.

apt-get install glusterfs-server
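
Depending on the release, the daemon may need to be enabled and started by hand; the systemd unit is named glusterd in newer packages (glusterfs-server in older Debian releases):

# unit name varies by release: glusterd (newer) or glusterfs-server (older)
systemctl enable glusterd
systemctl start glusterd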

Introduce Servers

The GlusterFS servers need to know each other. This can be done from any one server; here we will do it from gluster05. We need to introduce the two other servers.

gluster peer probe gluster06
gluster peer probe gluster07

Server Status

Now check that the servers were properly probed.

gluster peer status

Number of Peers: 2

Hostname: gluster06
Uuid: c041f1eb-72b5-4737-b25e-4f68c3379ef1
State: Peer in Cluster (Connected)

Hostname: gluster07
Uuid: a03ea1dc-5e74-4d1c-aa80-7d0386bce1e7
State: Peer in Cluster (Connected)

Create Volume

Create the gluster volume. With "replica 3 arbiter 1", three bricks are listed and the last one becomes the arbiter, so gluster07 must come last.

gluster volume create home replica 3 arbiter 1 gluster05:/srv/home/brick gluster06:/srv/home/brick gluster07:/srv/home/brick

Start Volume

Finally we can start the volume.

gluster volume start home

Volume Status

Check the status of the volume.

gluster volume status

Status of volume: home
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster05:/srv/home/brick             49152     0          Y       808  
Brick gluster06:/srv/home/brick             49152     0          Y       764  
Brick gluster07:/srv/home/brick             49152     0          Y       769  
Self-heal Daemon on localhost               N/A       N/A        Y       828  
Self-heal Daemon on gluster06               N/A       N/A        Y       785  
Self-heal Daemon on gluster07               N/A       N/A        Y       791  
 
Task Status of Volume home
------------------------------------------------------------------------------
There are no active volume tasks
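
Related to the split-brain theme, the self-heal state of the volume can be inspected at any time; on a healthy volume each brick should report zero entries:

gluster volume heal home info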

Volume Information

Let's check that one of the bricks is the arbiter.

gluster volume info

Volume Name: home
Type: Replicate
Volume ID: 26ad72e2-868b-44f1-b51f-24a6eb5f380b
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gluster05:/srv/home/brick
Brick2: gluster06:/srv/home/brick
Brick3: gluster07:/srv/home/brick (arbiter)
Options Reconfigured:
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
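
Mount from a Client

To use the volume from a machine with the GlusterFS client package installed, mount it by naming any one of the servers; the mount point below is just an example.

mount -t glusterfs gluster05:/home /mnt/home

The named server is only used to fetch the volume layout; the client then talks to the bricks directly, so the mount keeps working even if gluster05 later goes down.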
