GlusterFS Server
Network
The GlusterFS servers will have fixed IP addresses. This is configured in the DHCP server's list of statically assigned IP addresses, keyed on each domain's MAC address.
- 192.168.1.42 gluster01
- 192.168.1.43 gluster02
- 192.168.1.44 gluster03
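So that the gluster nodes can resolve each other by name independently of DNS, the same mapping can be pinned in /etc/hosts on every node. A minimal sketch, assuming there is no internal DNS zone for the cluster:

```shell
# Append the cluster name/IP mapping to /etc/hosts on each server,
# so the gluster nodes can resolve each other without DNS.
cat <<'EOF' >> /etc/hosts
192.168.1.42 gluster01
192.168.1.43 gluster02
192.168.1.44 gluster03
EOF
```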
Disk
Add hardware
To keep things separated, the OS will be on one disk and the glusterfs filesystem on another. We will add a second disk to the domain, to be used for the glusterfs file storage.
Configure disk
Partition
Create one large partition on /dev/vdb with partition type 8e Linux LVM.
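The partitioning can also be scripted. A sketch using sfdisk, with the device name /dev/vdb taken from the text above; this is destructive, so double-check the device first:

```shell
# Create one partition covering all of /dev/vdb, type 8e (Linux LVM).
# WARNING: this overwrites the existing partition table on /dev/vdb.
echo 'type=8e' | sfdisk /dev/vdb
```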
LVM
Set up a new physical volume, configure a volume group and add a volume
pvcreate /dev/vdb1
vgcreate vg_data /dev/vdb1
lvcreate --size 4G --name gluster_www vg_data
Format
Create a filesystem on the volume.
mkfs.ext4 /dev/vg_data/gluster_www
Mountpoint
Create the mountpoint.
mkdir /srv/www
fstab
Add the volume to /etc/fstab.
/dev/vg_data/gluster_www /srv/www ext4 defaults 0 0
Mount
Mount the new volume.
mount /srv/www
If the mount command does not succeed, it is most likely because the fstab entry is incorrect.
Software
Mount Point
Glusterfs refuses to use a directory that is itself a mountpoint as a brick.
volume create: www: failed: The brick gluster01:/srv/www is a mount point. Please create a sub-directory under the mount point and use that as the brick directory. Or use 'force' at the end of the command if you want to override this behavior.
If for some reason the filesystem is not mounted, then glusterfs might misunderstand the situation and tell the other servers that the directory is now empty, and all files would be deleted on all servers. To avoid this a directory is created under the mount point.
mkdir /srv/www/brick
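A guarded variant of the step above, as a sketch: it checks that /srv/www is really a mounted filesystem before creating the brick directory, using mountpoint(1) from util-linux.

```shell
# Only create the brick if the data filesystem is actually mounted;
# otherwise we would silently use the empty directory on the root disk.
if mountpoint -q /srv/www; then
    mkdir -p /srv/www/brick
else
    echo "/srv/www is not mounted - check /etc/fstab" >&2
    exit 1
fi
```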
Install
Install the server.
apt-get install glusterfs-server
If you are running Buster, make sure the server starts automatically, then start it
systemctl enable glusterd.service
service glusterd start
Introduce Servers
The glusterfs servers need to know each other. This can be done from any one server; here we will do it from gluster01. We need to introduce the two other servers.
gluster peer probe gluster02
gluster peer probe gluster03
Server Status
Now check that the servers were properly probed.
gluster peer status
Number of Peers: 2

Hostname: gluster02
Uuid: 031573c2-3b1f-4946-bd78-421563249db6
State: Peer in Cluster (Connected)

Hostname: gluster03
Uuid: ff5cec1c-6d7f-4db7-8676-08deff06b4d0
State: Peer in Cluster (Connected)
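The same check can be done non-interactively, for example from a monitoring script. A sketch: the helper function check_peers is hypothetical, and the expected count of 2 matches this three-node setup.

```shell
# check_peers EXPECTED: read `gluster peer status` output on stdin
# and succeed only if EXPECTED peers are in the connected state.
check_peers() {
    expected=$1
    connected=$(grep -c 'State: Peer in Cluster (Connected)')
    [ "$connected" -eq "$expected" ]
}

# Typical use on a gluster node:
#   gluster peer status | check_peers 2 || echo "peer problem" >&2
```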
Create Volume
Create the gluster volume.
gluster volume create www replica 3 transport tcp gluster01:/srv/www/brick gluster02:/srv/www/brick gluster03:/srv/www/brick
Persistent Metadata
Enable consistent metadata for the volume.
gluster volume set www cluster.consistent-metadata on
gluster volume set www features.utime on
Start Volume
Finally we can start the volume.
gluster volume start www
Volume Status
Check the status of the volume.
gluster volume status
Status of volume: www
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster01:/srv/www                    49152     0          Y       826
Brick gluster02:/srv/www                    49152     0          Y       1355
Brick gluster03:/srv/www                    49152     0          Y       1034
Self-heal Daemon on localhost               N/A       N/A        Y       846
Self-heal Daemon on gluster02               N/A       N/A        Y       1377
Self-heal Daemon on gluster03               N/A       N/A        Y       1054

Task Status of Volume www
------------------------------------------------------------------------------
There are no active volume tasks
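With the volume started, a client can mount it over the native protocol. A sketch, assuming the Debian glusterfs-client package and a client-side mountpoint /mnt/www (that path is illustrative):

```shell
# On a client machine:
apt-get install glusterfs-client
mkdir -p /mnt/www
mount -t glusterfs gluster01:/www /mnt/www
```

For a persistent mount, the matching fstab line would be "gluster01:/www /mnt/www glusterfs defaults,_netdev 0 0"; the _netdev option delays the mount until the network is up.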