= GlusterFS =

GlusterFS is a distributed filesystem. It has built-in redundancy, so it is possible to run two servers that automatically replicate files between them. If one server goes down, the other simply takes over, and once the failed server comes back up, files are automatically replicated again. Here we will use GlusterFS as redundant, network-mounted file storage.

For the sport of it, we will configure 4 GlusterFS servers.

== Network ==

The GlusterFS servers will have fixed IP addresses. These are configured in the DHCP server's list of statically assigned IP addresses, keyed on each domain's MAC address (see the sketch after the list).

 * 192.168.1.40 gluster01
 * 192.168.1.41 gluster02
 * 192.168.1.42 gluster03
 * 192.168.1.43 gluster04
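
As an illustration only (this guide does not say which DHCP server is in use, and the MAC address below is a placeholder), a static lease for gluster01 in ISC dhcpd's dhcpd.conf could look like this, with matching host blocks for the other three servers:

{{{
host gluster01 {
    # placeholder MAC address; use the domain's real one, e.g. from 'virsh dumpxml gluster01'
    hardware ethernet 52:54:00:00:00:40;
    fixed-address 192.168.1.40;
}
}}}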

== Disk ==

We will add an additional disk to each system. This disk will be used for file storage.

on the KVM host, add a 16 GiB hard disk for each of the servers

{{{
lvcreate --size 16G --name kvm_gluster01_vdb vg2
lvcreate --size 16G --name kvm_gluster02_vdb vg2
lvcreate --size 16G --name kvm_gluster03_vdb vg2
lvcreate --size 16G --name kvm_gluster04_vdb vg2
}}}

hot-add those disks to the running guests

{{{
virsh attach-disk gluster01 --source /dev/vg2/kvm_gluster01_vdb --target vdb
virsh attach-disk gluster02 --source /dev/vg2/kvm_gluster02_vdb --target vdb
virsh attach-disk gluster03 --source /dev/vg2/kvm_gluster03_vdb --target vdb
virsh attach-disk gluster04 --source /dev/vg2/kvm_gluster04_vdb --target vdb
}}}
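
If the guests are running, attach-disk as shown typically changes only the live domain, so the disk may not reappear after a guest restart; one sketch of making the attachment stick is to add the --persistent flag, for example:

{{{
virsh attach-disk gluster01 --source /dev/vg2/kvm_gluster01_vdb --target vdb --persistent
}}}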

in each guest

create one large partition on vdb with partition type Linux LVM (MBR type 8e)
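
One way to script this step (a sketch, assuming /dev/vdb is empty and an MBR partition table is acceptable) is with parted; running fdisk interactively and creating a single primary partition of type 8e gives the same result:

{{{
parted --script /dev/vdb mklabel msdos mkpart primary 1MiB 100% set 1 lvm on
}}}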

set up a new physical volume and volume group

{{{
pvcreate /dev/vdb1
vgcreate vg2 /dev/vdb1
}}}

create the volumes

{{{
lvcreate --size 4G --name gluster_www vg2
lvcreate --size 4G --name gluster_mail vg2
}}}

create the mountpoints

{{{
mkdir /srv/mail
mkdir /srv/www
}}}

format the volumes

{{{
mkfs.btrfs /dev/vg2/gluster_www
mkfs.btrfs /dev/vg2/gluster_mail
}}}

add the volumes to /etc/fstab

{{{
/dev/vg2/gluster_www   /srv/www    btrfs   defaults   0   0
/dev/vg2/gluster_mail  /srv/mail   btrfs   defaults   0   0
}}}
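
with the fstab entries in place, mount the new filesystems (mount looks the device up in /etc/fstab when given only the mountpoint)

{{{
mount /srv/www
mount /srv/mail
}}}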

== Software ==

on all four servers, install the GlusterFS server package

{{{
apt-get install glusterfs-server
}}}

from gluster01, connect to the other gluster servers

{{{
gluster peer probe gluster02
gluster peer probe gluster03
gluster peer probe gluster04
}}}
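
The probes assume that the gluster hostnames resolve on every server. If they are not already in DNS, one sketch is to add them to /etc/hosts on each machine, using the addresses from the Network section:

{{{
192.168.1.40 gluster01
192.168.1.41 gluster02
192.168.1.42 gluster03
192.168.1.43 gluster04
}}}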

check server status

{{{
gluster peer status
Number of Peers: 3

Hostname: gluster02
Uuid: 031573c2-3b1f-4946-bd78-421563249db6
State: Peer in Cluster (Connected)

Hostname: gluster03
Uuid: ff5cec1c-6d7f-4db7-8676-08deff06b4d0
State: Peer in Cluster (Connected)

Hostname: gluster04
Uuid: 65fc398a-52e9-4292-9bbb-884becfbf5d6
State: Peer in Cluster (Connected)
}}}

create the gluster volume. The force option is needed because we are creating the bricks directly on a mountpoint, which gluster otherwise refuses.

{{{
gluster volume create www replica 4 transport tcp gluster01:/srv/www gluster02:/srv/www gluster03:/srv/www gluster04:/srv/www force
}}}

start the volume and check its status

{{{
root@gluster01:/srv/www# gluster volume start www
volume start: www: success
root@gluster01:/srv/www# gluster volume status
Status of volume: www
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster01:/srv/www                    49152     0          Y       826
Brick gluster02:/srv/www                    49152     0          Y       1355
Brick gluster03:/srv/www                    49152     0          Y       1034
Brick gluster04:/srv/www                    49152     0          Y       1135
Self-heal Daemon on localhost               N/A       N/A        Y       846
Self-heal Daemon on gluster02               N/A       N/A        Y       1377
Self-heal Daemon on gluster03               N/A       N/A        Y       1054
Self-heal Daemon on gluster04               N/A       N/A        Y       1155

Task Status of Volume www
------------------------------------------------------------------------------
There are no active volume tasks
}}}
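
To actually use the volume as network-mounted storage, a client can mount it with the GlusterFS native client. A minimal sketch, assuming a client with the glusterfs-client package installed and an existing /mnt/www directory (both assumptions, not part of the setup above):

{{{
mount -t glusterfs gluster01:/www /mnt/www
}}}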
