GlusterFS

GlusterFS is a distributed filesystem with built-in redundancy: it is possible to run two or more servers that automatically replicate files between them. If one server goes down, another takes over, and once it comes back up, files are automatically replicated again. Here we will use GlusterFS as redundant network-mounted file storage.

For the sport of it, we will configure 4 GlusterFS servers.

Network

The GlusterFS servers will have fixed IP addresses. These are configured in the DHCP server's list of statically assigned IP addresses, keyed on each domain's MAC address.

  • 192.168.1.40 gluster01
  • 192.168.1.41 gluster02
  • 192.168.1.42 gluster03
  • 192.168.1.43 gluster04
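
The gluster peer probe commands further down refer to the servers by hostname, so every server must be able to resolve the names of the others. If DNS does not already cover this, identical /etc/hosts entries on all four machines are a minimal approach:

192.168.1.40 gluster01
192.168.1.41 gluster02
192.168.1.42 gluster03
192.168.1.43 gluster04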

Disk

Add hardware

To keep things separated, the OS will be on one disk and the GlusterFS filesystem on another. We will add a disk to the domain; this disk will be used for the GlusterFS file storage.

Configure disk

Partition

Create one large partition on /dev/vdb with partition type 8e Linux LVM.
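
This can be done interactively with fdisk. As a non-interactive sketch, assuming an empty disk that should get an MBR label, sfdisk accepts a short partition description on stdin:

# one partition covering the whole disk, MBR type 8e (Linux LVM)
printf 'label: dos\n,,8e\n' | sfdisk /dev/vdb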

LVM

Set up a new physical volume, create a volume group on it, and add a logical volume.

pvcreate /dev/vdb1                         # initialize the partition as an LVM physical volume
vgcreate vg2 /dev/vdb1                     # create volume group vg2 on that physical volume
lvcreate --size 4G --name gluster_www vg2  # create a 4 GiB logical volume for the www data
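
The result can be verified with the standard LVM reporting tools:

pvs /dev/vdb1   # show the physical volume
vgs vg2         # show the volume group
lvs vg2         # show the logical volume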

Format

Create a filesystem on the volume.

mkfs.btrfs /dev/vg2/gluster_www

Mountpoint

Create the mountpoint.

mkdir /srv/www

fstab

Add the volume to /etc/fstab.

/dev/vg2/gluster_www /srv/www               btrfs   defaults        0       0

Mount

Mount the new volume.

mount /srv/www

If the mount command does not succeed, it is most likely because the fstab entry is incorrect.
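
To verify that the filesystem really is mounted where expected, findmnt prints the source device and filesystem type for a given mount point:

findmnt /srv/www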

Software

Mount Point

GlusterFS is not happy about using a directory which is also a mount point:

volume create: www: failed: The brick gluster01:/srv/www is a mount point. Please create a sub-directory under the mount point and use that as the brick directory. Or use 'force' at the end of the command if you want to override this behavior.

This is a safety measure: if for some reason the filesystem is not mounted, GlusterFS would see an empty directory at /srv/www, conclude that all files have been deleted, and replicate that deletion to the other servers. To avoid this, a directory is created under the mount point; it only exists while the filesystem is actually mounted.

mkdir /srv/www/brick
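
As an extra safeguard, the directory can be created only when the filesystem is actually mounted; mountpoint (from util-linux) makes that easy to check:

mountpoint -q /srv/www && mkdir -p /srv/www/brick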

Install

Install the GlusterFS server package on each of the four servers.

apt-get install glusterfs-server
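
On Debian the package should start the management daemon automatically; on other systemd-based distributions it may need to be enabled first. Assuming the service is named glusterd, as it is on Debian:

systemctl status glusterd         # verify the management daemon is running
systemctl enable --now glusterd   # enable and start it if it is not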

Introduce Servers

The GlusterFS servers need to know each other. This can be done from any one server; here we will do it from gluster01 and introduce the three other servers.

gluster peer probe gluster02
gluster peer probe gluster03
gluster peer probe gluster04

Server Status

Now check that the servers were properly probed.

# gluster peer status
Number of Peers: 3

Hostname: gluster02
Uuid: 031573c2-3b1f-4946-bd78-421563249db6
State: Peer in Cluster (Connected)

Hostname: gluster03
Uuid: ff5cec1c-6d7f-4db7-8676-08deff06b4d0
State: Peer in Cluster (Connected)

Hostname: gluster04
Uuid: 65fc398a-52e9-4292-9bbb-884becfbf5d6
State: Peer in Cluster (Connected)

Create Volume

Create the gluster volume. With replica 4, every file is stored on all four bricks.

gluster volume create www replica 4 transport tcp gluster01:/srv/www/brick gluster02:/srv/www/brick gluster03:/srv/www/brick gluster04:/srv/www/brick

Start Volume

Finally we can start the volume.

gluster volume start www

Volume Status

Check the status of the volume.

# gluster volume status
Status of volume: www
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster01:/srv/www/brick              49152     0          Y       826  
Brick gluster02:/srv/www/brick              49152     0          Y       1355 
Brick gluster03:/srv/www/brick              49152     0          Y       1034 
Brick gluster04:/srv/www/brick              49152     0          Y       1135 
Self-heal Daemon on localhost               N/A       N/A        Y       846  
Self-heal Daemon on gluster02               N/A       N/A        Y       1377 
Self-heal Daemon on gluster03               N/A       N/A        Y       1054 
Self-heal Daemon on gluster04               N/A       N/A        Y       1155 
 
Task Status of Volume www
------------------------------------------------------------------------------
There are no active volume tasks
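
gluster volume status shows the runtime state; the volume's configuration (replica count, transport, brick list) can be shown with:

gluster volume info www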
