
Glusterfs 3.4.2 data replication doesn't work on Fedora 20 two node cluster

asked 2014-03-05 11:01:52 -0500


updated 2014-03-05 15:22:53 -0500

Two-node GlusterFS cluster:

[root@dallas1 Data001(keystone_admin)]$ gluster peer status
Number of Peers: 1

Hostname: dallas2.localdomain
Uuid: b3b1cf43-2fec-4904-82d4-b9be03f77c5f
State: Peer in Cluster (Connected)

[root@dallas1 ~(keystone_admin)]$ gluster volume info

Volume Name: cinder-volumes002
Type: Replicate
Volume ID: 732da540-2eef-4842-90d5-55a657bcf4e6
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: dallas1.localdomain:/RPL/Data001
Brick2: dallas2.localdomain:/RPL/Data001
Options Reconfigured:
auth.allow: 192.168.1.*
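
For reference, a 1 x 2 replicated volume with these bricks and options is typically created along these lines (a sketch; the exact commands used here were not shown):

```shell
gluster volume create cinder-volumes002 replica 2 \
    dallas1.localdomain:/RPL/Data001 dallas2.localdomain:/RPL/Data001
gluster volume set cinder-volumes002 auth.allow "192.168.1.*"
gluster volume start cinder-volumes002
```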

[root@dallas1 ~(keystone_admin)]$ cd /RPL/Data001
[root@dallas1 Data001(keystone_admin)]$ ls -l
total 636492
-rw-rw-rw-. 2 root root 7516192768 Mar  5 20:43 volume-b3fe6e53-de83-4eb5-be7b-eded741c98dc

[root@dallas1 ~]# ssh dallas2
Last login: Wed Mar  5 19:07:39 2014

[root@dallas2 Data001]# gluster peer status
Number of Peers: 1

Hostname: 192.168.1.130
Uuid: a57433dd-4a1a-4442-a5ae-ba2f682e5c79
State: Peer in Cluster (Connected)

[root@dallas2 ~]# cd /RPL/Data001
[root@dallas2 Data001]# ls -la
total 16
drwxr-xr-x. 3 root root 4096 Mar  5 20:25 .
drwxr-xr-x. 3 root root 4096 Mar  5 20:25 ..
drw-------. 5 root root 4096 Mar  5 20:26 .glusterfs

[root@dallas2 Data001]# date
Wed Mar  5 20:53:24 MSK 2014
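
The brick on dallas2 holding only .glusterfs suggests nothing has been replicated yet. A few diagnostics for a situation like this (a sketch; the commands exist in GlusterFS 3.4). Note that replication only happens for writes made through a glusterfs client mount, never for files created directly inside a brick directory:

```shell
gluster volume status cinder-volumes002      # are both bricks online, with ports assigned?
gluster volume heal cinder-volumes002 info   # any entries pending self-heal?
# Writes must go through a client mount to be replicated:
mount -t glusterfs dallas1.localdomain:/cinder-volumes002 /mnt
```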

Another attempt:

[root@dallas1 ~(keystone_admin)]$ service glusterd status -l
Redirecting to /bin/systemctl status  -l glusterd.service
glusterd.service - GlusterFS an clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)
   Active: active (running) since Wed 2014-03-05 21:59:54 MSK; 3h 18min ago
  Process: 2580 ExecStart=/usr/sbin/glusterd -p /run/glusterd.pid (code=exited, status=0/SUCCESS)
 Main PID: 2589 (glusterd)
   CGroup: /system.slice/glusterd.service
           ├─ 2589 /usr/sbin/glusterd -p /run/glusterd.pid
           ├─15412 /usr/sbin/glusterfsd -s dallas1.localdomain --volfile-id cinder-volumes012.dallas1.localdomain.FDR-Replicate -p /var/lib/glusterd/vols/cinder-volumes012/run/dallas1.localdomain-FDR-Replicate.pid -S /var/run/8ce78c26e525c50cc10b72362863e173.socket --brick-name /FDR/Replicate -l /var/log/glusterfs/bricks/FDR-Replicate.log --xlator-option *-posix.glusterd-uuid=a57433dd-4a1a-4442-a5ae-ba2f682e5c79 --brick-port 49155 --xlator-option cinder-volumes012-server.listen-port=49155
           ├─15424 /usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p /var/lib/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log -S /var/run/2e81d8930636bcf11b9ff2c39a16bb8b.socket
           ├─15428 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/lib/glusterd/glustershd/run/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/d76b52c3f5f530727ca59045ef42b023.socket --xlator-option *replicate*.node-uuid=a57433dd-4a1a-4442-a5ae-ba2f682e5c79
           └─15452 /sbin/rpc.statd

Mar 06 00:44:14 dallas1.localdomain systemd[1]: Started GlusterFS an clustered file-system server.
Mar 06 00:52:01 dallas1.localdomain rpc.statd[10223]: Version 1.2.9 starting
Mar 06 00:52:01 dallas1.localdomain sm-notify[10224]: Version 1.2.9 starting
Mar 06 01:18:20 dallas1.localdomain rpc.statd[15452]: Version 1.2.9 starting
Mar 06 01:18:20 dallas1.localdomain sm-notify[15453]: Version 1.2.9 starting

Volume Name: cinder-volumes012
Type: Replicate
Volume ID: 9ee31c6c-0ae3-4fee-9886-b9cb6a518f48
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: dallas1.localdomain:/FDR/Replicate
Brick2: dallas2.localdomain:/FDR/Replicate
Options Reconfigured:
auth.allow: 192.168.1.*
[root@dallas1 ~]# cd /FDR/Replicate
[root@dallas1 Replicate]# pwd
/FDR/Replicate
[root@dallas1 Replicate]# vi test
[root@dallas1 Replicate]# ls -l
total ...

1 answer


answered 2014-03-06 05:57:48 -0500


updated 2014-03-09 06:29:59 -0500

GlusterFS 3.4.2 data replication doesn't work for the Cinder backend in RDO Havana on a Fedora 20 two-node cluster. The final conclusion is in the Bugzilla record: Bug 1073071.

When setting up /etc/sysconfig/iptables in Jan 2014, I was able to comment out the following lines:

# -A FORWARD -j REJECT --reject-with icmp-host-prohibited
# -A INPUT -j REJECT --reject-with icmp-host-prohibited
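
An alternative to commenting out the catch-all REJECT rules is to insert ACCEPT rules for the Gluster ports above them. A sketch for the GlusterFS 3.4 defaults (management on 24007-24008, brick ports allocated from 49152 up, as the 49155 in the brick process above shows, plus portmapper and Gluster NFS):

```
-A INPUT -p tcp --dport 24007:24008 -j ACCEPT    # glusterd management
-A INPUT -p tcp --dport 49152:49160 -j ACCEPT    # brick ports (one per brick, from 49152)
-A INPUT -p tcp --dport 111 -j ACCEPT            # portmapper, used by Gluster NFS
-A INPUT -p udp --dport 111 -j ACCEPT
-A INPUT -p tcp --dport 38465:38467 -j ACCEPT    # Gluster NFS server
```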

That is exactly what Gluster 3.4.2 needs.

In the meantime, when installing an RDO Havana Controller from scratch, it is a bit tricky to start the controller with these lines commented out, but it is still possible. Keeping these lines in /etc/sysconfig/iptables breaks even a simple:

#  mount -t  glusterfs 192.168.1.130:cinder-volumes  /mnt01

on the other cluster peer (status: connected).
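
Once the mount succeeds, replication can be verified end to end; a sketch using cinder-volumes002 and the brick paths shown in the volume info above:

```shell
mount -t glusterfs dallas1.localdomain:/cinder-volumes002 /mnt01
echo ok > /mnt01/replica-check
ls -l /RPL/Data001/replica-check              # appears on the dallas1 brick
ssh dallas2 ls -l /RPL/Data001/replica-check  # ...and on the dallas2 brick
```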

I want to thank Joe Julian for his patience and attention to my problems.

Stats

Seen: 550 times

Last updated: Mar 09 '14