GlusterFS
GlusterFS is a distributed filesystem. It has built-in redundancy, so it is possible to run two servers which automatically replicate files between them. If one server goes down, the other just takes over; once it comes up again, the files are automatically replicated. Here we will use GlusterFS as redundant, network-mounted file storage.
For the sport of it, we will configure 4 GlusterFS servers.
Network
The GlusterFS servers will have fixed IP-addresses. These are configured in the DHCP server's list of statically assigned IP-addresses, keyed on each domain's MAC address; a sketch of such an entry follows the list below.
- 192.168.1.40 gluster01
- 192.168.1.41 gluster02
- 192.168.1.42 gluster03
- 192.168.1.43 gluster04
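As an illustration, a static host entry for ISC dhcpd might look roughly like this; the MAC address below is a placeholder, not the real one:
host gluster01 {
    hardware ethernet 52:54:00:00:00:40;    # placeholder MAC, use the gluster01 domain's real one
    fixed-address 192.168.1.40;
}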
Disk
Add hardware
We will add an additional disk to the system. This disk will be used for file storage. See Domain Editing for how to add the disk to the domain.
Configure disk
Partition
Create one large partition on vdb with partition type 8e Linux LVM.
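One non-interactive way to do this, assuming /dev/vdb is the new and still empty disk, is with sfdisk; interactive fdisk works just as well:
echo ',,8e' | sfdisk /dev/vdb    # one partition spanning the whole disk, type 8e (Linux LVM)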
LVM
Set up a new physical volume and volume group.
pvcreate /dev/vdb1
vgcreate vg2 /dev/vdb1
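Optionally, verify the result with LVM's reporting commands:
pvs    # should list /dev/vdb1 as part of vg2
vgs    # should list vg2 with roughly the whole disk free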
LVM Volume
Create a 4 GiB logical volume.
lvcreate --size 4G --name gluster_www vg2
Mountpoint
Create the mountpoint.
mkdir /srv/www
Format
Format the volume.
mkfs.btrfs /dev/vg2/gluster_www
fstab
Add the volume to /etc/fstab.
/dev/vg2/gluster_www /srv/www btrfs defaults 0 0
Mount
Mount the new volume.
mount /srv/www
If the mount command does not succeed, it is most likely because the fstab entry is incorrect.
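A quick way to confirm that the volume is actually mounted (findmnt is part of util-linux):
findmnt /srv/www    # prints the source and filesystem type if /srv/www is a mountpoint, nothing otherwise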
Software
Mountpoint
GlusterFS is not happy about using a directory which is also a mountpoint. If for some reason the filesystem is not mounted, GlusterFS might misunderstand the situation and tell the other servers that the directory is now empty and all files should be deleted. To avoid this, an extra directory is added.
mkdir /srv/www/glusterfs
If the mountpoint itself is used as the brick, volume creation fails:
volume create: www: failed: The brick gluster01:/srv/www is a mount point. Please create a sub-directory under the mount point and use that as the brick directory. Or use 'force' at the end of the command if you want to override this behavior.
Install
Install the GlusterFS server package on each of the four servers.
apt-get install glusterfs-server
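After installation the management daemon should be running; depending on the distribution and version, the service is named glusterd or glusterfs-server:
systemctl status glusterd    # or: service glusterfs-server status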
From gluster01, connect to the other gluster servers.
gluster peer probe gluster02
gluster peer probe gluster03
gluster peer probe gluster04
Check the peer status.
gluster peer status
Number of Peers: 3

Hostname: gluster02
Uuid: 031573c2-3b1f-4946-bd78-421563249db6
State: Peer in Cluster (Connected)

Hostname: gluster03
Uuid: ff5cec1c-6d7f-4db7-8676-08deff06b4d0
State: Peer in Cluster (Connected)

Hostname: gluster04
Uuid: 65fc398a-52e9-4292-9bbb-884becfbf5d6
State: Peer in Cluster (Connected)
Create the gluster volume. Because the bricks are sub-directories under the mountpoints rather than the mountpoints themselves, the force option is not needed.
gluster volume create www replica 4 transport tcp gluster01:/srv/www/glusterfs gluster02:/srv/www/glusterfs gluster03:/srv/www/glusterfs gluster04:/srv/www/glusterfs
Start the volume and check its status.
root@gluster01:/srv/www# gluster volume start www
volume start: www: success
root@gluster01:/srv/www# gluster volume status
Status of volume: www
Gluster process                     TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster01:/srv/www            49152     0          Y       826
Brick gluster02:/srv/www            49152     0          Y       1355
Brick gluster03:/srv/www            49152     0          Y       1034
Brick gluster04:/srv/www            49152     0          Y       1135
Self-heal Daemon on localhost       N/A       N/A        Y       846
Self-heal Daemon on gluster02       N/A       N/A        Y       1377
Self-heal Daemon on gluster03       N/A       N/A        Y       1054
Self-heal Daemon on gluster04      N/A       N/A        Y       1155

Task Status of Volume www
------------------------------------------------------------------------------
There are no active volume tasks
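To actually use the volume as network-mounted storage, mount it from a client; a minimal sketch, assuming the glusterfs-client package is installed and the mountpoint /mnt/www exists:
mount -t glusterfs gluster01:/www /mnt/www    # any of the four servers can be named here
The client fetches the list of all bricks from the named server at mount time, so the mount keeps working even if gluster01 goes down afterwards.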