GlusterFS

Configure the server, then the client.

GlusterFS is a distributed filesystem. It has built-in redundancy, so it is possible to run two servers that automatically replicate files between them. If one server goes down, the other simply takes over; once it comes back up, files are automatically replicated again. Here we will use GlusterFS as redundant network-mounted file storage.

For the sport of it, we will configure 4 GlusterFS servers.

Network

The GlusterFS servers will have fixed IP addresses. This is configured in the DHCP server's list of statically assigned IP addresses, keyed on each domain's MAC address (see the example after the list).

  • 192.168.1.40 gluster01
  • 192.168.1.41 gluster02
  • 192.168.1.42 gluster03
  • 192.168.1.43 gluster04
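
For illustration, a static assignment for gluster01 might look like the entry below in ISC dhcpd; the MAC address is hypothetical, and the syntax will differ for other DHCP servers.

host gluster01 {
    hardware ethernet 52:54:00:00:01:40;
    fixed-address 192.168.1.40;
}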

Disk

Add hardware

To keep things separate, the OS will be on one disk and the GlusterFS filesystem on another. We will add a disk to the domain; this disk will be used for GlusterFS file storage.

Configure disk

Partition

Create one large partition on /dev/vdb with partition type 8e Linux LVM.
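
For reference, a rough sketch of the interactive fdisk session is shown below; the exact key sequence may vary slightly between fdisk versions.

fdisk /dev/vdb
# n   new primary partition, accept the defaults to use the whole disk
# t   change the partition type to 8e (Linux LVM)
# w   write the partition table and exit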

LVM

Set up a new physical volume, create a volume group, and add a logical volume.

pvcreate /dev/vdb1
vgcreate vg2 /dev/vdb1
lvcreate --size 4G --name gluster_www vg2
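
To verify the result, the standard LVM reporting commands can be used; the exact output depends on your system.

pvs /dev/vdb1
vgs vg2
lvs vg2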

Format

Create a filesystem on the volume.

mkfs.btrfs /dev/vg2/gluster_www
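
To confirm that the filesystem was created, either of the following should report btrfs on the new volume; the output varies with tool versions.

blkid /dev/vg2/gluster_www
btrfs filesystem show /dev/vg2/gluster_www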

Mountpoint

Create the mountpoint.

mkdir /srv/www

fstab

Add the volume to /etc/fstab.

/dev/vg2/gluster_www /srv/www               btrfs   defaults        0       0

Mount

Mount the new volume.

mount /srv/www

If the mount command does not succeed, it is most likely because the fstab entry is incorrect.
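
To check that the volume really is mounted where expected, either of the following should list /srv/www.

findmnt /srv/www
df -h /srv/www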

Software

Mount Point

GlusterFS is not happy about using a directory which is also a mount point as the brick:

volume create: www: failed: The brick gluster01:/srv/www is a mount point. Please create a sub-directory under the mount point and use that as the brick directory. Or use 'force' at the end of the command if you want to override this behavior.

If for some reason the filesystem is not mounted, GlusterFS might misunderstand the situation, tell the other servers that the directory is now empty, and have all files deleted on all servers. To avoid this, a directory is created under the mount point and used as the brick.

mkdir /srv/www/brick

Install

Install the server package on each of the four machines.

apt-get install glusterfs-server
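
After installation the management daemon should be running. The systemd unit is typically called glusterd, though on some Debian releases it is named glusterfs-server; check whichever exists on your system.

systemctl status glusterd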

Introduce Servers

The GlusterFS servers need to know each other. This can be done from any one server; here we will do it from gluster01 and introduce the three other servers.

gluster peer probe gluster02
gluster peer probe gluster03
gluster peer probe gluster04

Server Status

Now check that the servers were properly probed.

gluster peer status

Number of Peers: 3

Hostname: gluster02
Uuid: 031573c2-3b1f-4946-bd78-421563249db6
State: Peer in Cluster (Connected)

Hostname: gluster03
Uuid: ff5cec1c-6d7f-4db7-8676-08deff06b4d0
State: Peer in Cluster (Connected)

Hostname: gluster04
Uuid: 65fc398a-52e9-4292-9bbb-884becfbf5d6
State: Peer in Cluster (Connected)

Create Volume

Create the gluster volume.

gluster volume create www replica 4 transport tcp gluster01:/srv/www/brick gluster02:/srv/www/brick gluster03:/srv/www/brick gluster04:/srv/www/brick
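
Before starting the volume it can be inspected; the output should list all four bricks and show the volume status as Created.

gluster volume info www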

Start Volume

Finally we can start the volume.

gluster volume start www

Volume Status

Check the status of the volume.

gluster volume status

Status of volume: www
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster01:/srv/www                    49152     0          Y       826  
Brick gluster02:/srv/www                    49152     0          Y       1355 
Brick gluster03:/srv/www                    49152     0          Y       1034 
Brick gluster04:/srv/www                    49152     0          Y       1135 
Self-heal Daemon on localhost               N/A       N/A        Y       846  
Self-heal Daemon on gluster02               N/A       N/A        Y       1377 
Self-heal Daemon on gluster03               N/A       N/A        Y       1054 
Self-heal Daemon on gluster04               N/A       N/A        Y       1155 
 
Task Status of Volume www
------------------------------------------------------------------------------
There are no active volume tasks
