= GlusterFS =

GlusterFS is a distributed filesystem. It has built-in redundancy, so it is possible to run two servers which automatically replicate files between them. If one server goes down, the other just takes over; once it comes up again, files are automatically replicated. Here we will use GlusterFS as redundant network-mounted file storage.

For the sport of it, we will configure 4 GlusterFS servers.

Configure the [[GlusterFS Server|servers]], then the [[GlusterFS Client|client]]. Once completed, continue by setting up [[GlusterFS Encryption|encryption]].

== Network ==
The GlusterFS servers will have fixed IP addresses. These are configured in the DHCP server's list of statically assigned IP addresses, using each domain's MAC address.

 * 192.168.1.40 gluster01
 * 192.168.1.41 gluster02
 * 192.168.1.42 gluster03
 * 192.168.1.43 gluster04
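The servers refer to each other by hostname (for example with `gluster peer probe` below), so each name must resolve on every server. If the names are not served by DNS, a minimal sketch is to add them to `/etc/hosts` on all four machines:
{{{
# /etc/hosts entries on every gluster server (addresses from the list above)
192.168.1.40 gluster01
192.168.1.41 gluster02
192.168.1.42 gluster03
192.168.1.43 gluster04
}}}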

== Disk ==

=== Add hardware ===
We will add an additional disk to the system, which will be used for file storage; see [[Domain Editing|Domain Editing]].
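As a rough sketch, assuming the servers are libvirt/KVM domains (the image path, size and domain name below are only examples), attaching the disk from the virtualisation host could look like this:
{{{
# create a backing image and attach it to the domain as vdb (example values)
qemu-img create -f raw /var/lib/libvirt/images/gluster01-data.img 5G
virsh attach-disk gluster01 /var/lib/libvirt/images/gluster01-data.img vdb --persistent
}}}
Repeat for gluster02 through gluster04.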

=== Configure disk ===

==== Partition ====

Create one large partition on vdb with partition type `8e Linux LVM`.
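One non-interactive way to do this, assuming an MBR (DOS) partition table, is with `sfdisk`; interactive `fdisk` works just as well:
{{{
# create a single partition spanning /dev/vdb with type 8e (Linux LVM)
sfdisk /dev/vdb <<EOF
label: dos
,,8e
EOF
}}}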

==== LVM ====
Set up a new physical volume and volume group.
{{{
pvcreate /dev/vdb1
vgcreate vg2 /dev/vdb1
}}}

==== LVM Volume ====
Create a 4 GB logical volume for the GlusterFS data.
{{{
lvcreate --size 4G --name gluster_www vg2
}}}

==== Mountpoint ====
Create the mountpoint.
{{{
mkdir /srv/www
}}}

==== Format ====
Format the volume.
{{{
mkfs.btrfs /dev/vg2/gluster_www
}}}

==== fstab ====
Add the volume to `/etc/fstab`.
{{{
/dev/vg2/gluster_www /srv/www btrfs defaults 0 0
}}}

==== Mount ====
Mount the new volume.
{{{
mount /srv/www
}}}
If the mount command does not succeed, it is most likely because the fstab entry is incorrect.
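To confirm that the volume is mounted before continuing:
{{{
findmnt /srv/www
}}}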

== Software ==

=== Mountpoint ===
GlusterFS is not happy about using a directory which is also a mountpoint. If for some reason the filesystem is not mounted, GlusterFS might misunderstand the situation and tell the other servers that the directory is now empty and that all files should be deleted. To avoid this, an extra directory is added.
{{{
mkdir /srv/www/glusterfs
}}}


Without the sub-directory, creating the volume directly on the mountpoint fails like this:
{{{
volume create: www: failed: The brick gluster01:/srv/www is a mount point. Please create a sub-directory under the mount point and use that as the brick directory. Or use 'force' at the end of the command if you want to override this behavior.
}}}

=== Install ===
Install the GlusterFS server package on all four servers.
{{{
apt-get install glusterfs-server
}}}
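The package normally starts the GlusterFS management daemon automatically. If it is not running, enable and start it on every server; note that the unit is called `glusterd` on current Debian/Ubuntu releases and `glusterfs-server` on older ones:
{{{
systemctl enable --now glusterd
systemctl status glusterd
}}}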

From gluster01, connect to the other gluster servers.
{{{
gluster peer probe gluster02
gluster peer probe gluster03
gluster peer probe gluster04
}}}

Check that all peers are connected.

{{{
gluster peer status
Number of Peers: 3

Hostname: gluster02
Uuid: 031573c2-3b1f-4946-bd78-421563249db6
State: Peer in Cluster (Connected)

Hostname: gluster03
Uuid: ff5cec1c-6d7f-4db7-8676-08deff06b4d0
State: Peer in Cluster (Connected)

Hostname: gluster04
Uuid: 65fc398a-52e9-4292-9bbb-884becfbf5d6
State: Peer in Cluster (Connected)
}}}

Create the gluster volume, using the sub-directory created above as the brick directory. Creating the volume directly on the mountpoint would require the `force` option.
{{{
gluster volume create www replica 4 transport tcp gluster01:/srv/www/glusterfs gluster02:/srv/www/glusterfs gluster03:/srv/www/glusterfs gluster04:/srv/www/glusterfs
}}}
Start the volume and check that all bricks are online.

{{{

root@gluster01:/srv/www# gluster volume start www
volume start: www: success
root@gluster01:/srv/www# gluster volume status
Status of volume: www
Gluster process                              TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster01:/srv/www                     49152     0          Y       826
Brick gluster02:/srv/www                     49152     0          Y       1355
Brick gluster03:/srv/www                     49152     0          Y       1034
Brick gluster04:/srv/www                     49152     0          Y       1135
Self-heal Daemon on localhost                N/A       N/A        Y       846
Self-heal Daemon on gluster02                N/A       N/A        Y       1377
Self-heal Daemon on gluster03                N/A       N/A        Y       1054
Self-heal Daemon on gluster04                N/A       N/A        Y       1155
 
Task Status of Volume www
------------------------------------------------------------------------------
There are no active volume tasks
}}}
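The volume can now be mounted from any machine with the GlusterFS client installed. The proper client setup is described on the [[GlusterFS Client|client]] page, but as a quick sanity check (the mountpoint here is only an example) it can be mounted by hand:
{{{
mount -t glusterfs gluster01:/www /mnt
}}}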
If you only want to run 2 GlusterFS servers, then set up [[GlusterFS Server with Arbiter|servers with arbiter]] and [[GlusterFS Client with Arbiter|client with arbiter]] instead.
