GlusterFS : Step by step configuration

Installing GlusterFS 

Step 1 - Prepare two nodes

  • Server 1 : 192.168.0.1 or server1
  • Server 2 : 192.168.0.2 or server2
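The later steps refer to the peers by hostname. If DNS does not already resolve server1/server2, a hosts-file entry on each node makes that work (a sketch, using the addresses from this step):

```shell
# Append to /etc/hosts on both servers so each node can
# resolve the other by name (addresses taken from Step 1)
192.168.0.1   server1
192.168.0.2   server2
```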

Step 2 - Install GlusterFS on both servers

#sudo apt update && sudo apt install -y glusterfs-server   # Ubuntu/Debian
#sudo systemctl enable --now glusterd


Step 3 - Configure firewall - allow traffic between the hosts on both servers

#sudo firewall-cmd --permanent --zone=public --add-source=192.168.0.2   # on server1; use 192.168.0.1 on server2
#sudo firewall-cmd --reload
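firewall-cmd implies firewalld; on a stock Ubuntu install (as in Step 2) ufw is the more common front end. A hedged equivalent, assuming ufw rather than firewalld manages the firewall:

```shell
# ufw equivalent (an assumption -- only if ufw, not firewalld, is in use)
# Run on server1 to allow the peer; mirror with 192.168.0.1 on server2
sudo ufw allow from 192.168.0.2
sudo ufw reload
```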


Step 4 - Create GlusterFS Storage
- Create a partition on the external hard disk (/dev/sdb) on both servers
*any other disk or partition can be used instead
#sudo fdisk /dev/sdb

- Format the partition with XFS
#sudo mkfs.xfs /dev/sdb1

- Create a directory for GlusterFS storage on both servers
#sudo mkdir /mnt/glusterfs

- Mount the partition and make the mount persistent across reboots
#vi /etc/fstab
add - /dev/sdb1 /mnt/glusterfs xfs defaults 0 0
#mount -a
#df -h

...
/dev/sdb1     492G   25M  467G   1% /mnt/glusterfs
...
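Device names like /dev/sdb1 can change between boots when disks are added or reordered; an fstab entry keyed on the filesystem UUID is more robust. A sketch, with the UUID left as a placeholder to be filled in from blkid's output:

```shell
# Look up the UUID of the freshly formatted filesystem
sudo blkid /dev/sdb1
# Then reference it in /etc/fstab instead of the device name, e.g.:
# UUID=<uuid-from-blkid> /mnt/glusterfs xfs defaults 0 0
```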
Step 5 - Configure GlusterFS Volume
- Create a trusted storage pool by probing server2 from server1
#gluster peer probe server2
...
peer probe: success
...
#gluster peer status
...
Number of Peers: 1

Hostname: server2
Uuid: 82786389-eed0-4413-8922-87b098a825a2
State: Peer in Cluster (Connected)
...
- List the storage pool
#gluster pool list
....
UUID                                    Hostname        State
d90658eb-a348-49f6-9926-b353dc90bacc    server2         Connected
616f53d8-793d-408d-b995-e94411dd1e34    localhost       Connected
....
- On server2, the peering can be verified from the other direction
#sudo gluster peer probe server1
...
peer probe: success
...
#gluster peer status
...
Number of Peers: 1 

Hostname: server1
Uuid: 2fb113e7-791e-4528-818e-5566ccdb984c
State: Peer in Cluster (Connected)
...
- Create a brick directory on both nodes
#sudo mkdir /mnt/glusterfs/vol

- On server1, create a volume named voldata with two replicas
#sudo gluster volume create voldata replica 2 server1:/mnt/glusterfs/vol server2:/mnt/glusterfs/vol
...
volume create: voldata: success: please start the volume to access data
...
- Start the volume
#sudo gluster volume start voldata
...
volume start: voldata: success
...
- Check the status of the created volume 
#sudo gluster volume status
...
Status of volume: voldata
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick server1:/mnt/glusterfs/vol            51372     0          Y       4233
Brick server2:/mnt/glusterfs/vol            50993     0          Y       4932
Self-heal Daemon on localhost               N/A       N/A        Y       4250
Self-heal Daemon on server2                 N/A       N/A        Y       4949

Task Status of Volume voldata
------------------------------------------------------------------------------
There are no active volume tasks
...
- Check the info of the created volume
# gluster volume info
...
Volume Name: voldata
Type: Replicate
Volume ID: bb2bb2fe-1380-49fd-9bbd-551c497c8a33
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: server1:/mnt/glusterfs/vol
Brick2: server2:/mnt/glusterfs/vol
Options Reconfigured:
cluster.granular-entry-heal: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
...
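Volume behaviour can be tuned after creation with gluster volume set. For example, mounting can be restricted to known clients; auth.allow is a standard volume option, though the address list below is an assumption based on the two nodes in this setup:

```shell
# Allow mounts only from the two cluster nodes (example addresses)
sudo gluster volume set voldata auth.allow 192.168.0.1,192.168.0.2
# Confirm the option appears under "Options Reconfigured"
sudo gluster volume info voldata
```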
Step 6 - Mount the volume on both servers via /etc/fstab
#vi /etc/fstab
- add the line below on server1:
server1:/voldata /home/<webpath> glusterfs defaults,_netdev 0 0
#mount -a

- add the line below on server2:
server2:/voldata /home/<webpath> glusterfs defaults,_netdev 0 0
#mount -a
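With a replica 2 volume, a client can fetch the volume layout from the other node if its primary is down. The backup-volfile-servers mount option covers this; a sketch of server1's fstab line (mirror with server1 as the backup on server2):

```shell
# server1's /etc/fstab line with a fallback volfile server (sketch)
# server1:/voldata /home/<webpath> glusterfs defaults,_netdev,backup-volfile-servers=server2 0 0
```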

Step 7 - Test Replication
- Create a file on server1:
#touch /home/<webpath>/file1 

- On server2, the same file created on server1 will be visible
#ls -l /home/<webpath>
...
total 0
-rw-r--r-- 1 root root 0 Nov  7 13:42 file1
...
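Replication can also be checked from the brick side: the file written through the mount should appear in each server's brick directory with identical content. A quick sketch (run on both servers and compare the checksums; write only through the mount, never into the brick directly):

```shell
# The replicated file should exist inside the brick on each node
ls -l /mnt/glusterfs/vol/file1
# Checksums should be identical on server1 and server2
md5sum /mnt/glusterfs/vol/file1
```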

 

source : https://www.howtoforge.com/how-to-install-and-configure-glusterfs-on-ubuntu/ 
