MySQL service fails to start on the 2nd Galera node
Hi Experts,
I am setting up a two-node Galera cluster on Ubuntu 12.04. On node 1 I get the following output:
mysql> SHOW STATUS LIKE 'wsrep%';
+----------------------------+--------------------------------------+
| Variable_name              | Value                                |
+----------------------------+--------------------------------------+
| wsrep_local_state_uuid     | 7dbb763b-0bea-11e3-0800-36c52d70d461 |
| wsrep_protocol_version     | 4                                    |
| wsrep_last_committed       | 1                                    |
| wsrep_replicated           | 1                                    |
| wsrep_replicated_bytes     | 203                                  |
| wsrep_received             | 2                                    |
| wsrep_received_bytes       | 133                                  |
| wsrep_local_commits        | 0                                    |
| wsrep_local_cert_failures  | 0                                    |
| wsrep_local_bf_aborts      | 0                                    |
| wsrep_local_replays        | 0                                    |
| wsrep_local_send_queue     | 0                                    |
| wsrep_local_send_queue_avg | 0.000000                             |
| wsrep_local_recv_queue     | 0                                    |
| wsrep_local_recv_queue_avg | 0.500000                             |
| wsrep_flow_control_paused  | 0.000000                             |
| wsrep_flow_control_sent    | 0                                    |
| wsrep_flow_control_recv    | 0                                    |
| wsrep_cert_deps_distance   | 1.000000                             |
| wsrep_apply_oooe           | 0.000000                             |
| wsrep_apply_oool           | 0.000000                             |
| wsrep_apply_window         | 1.000000                             |
| wsrep_commit_oooe          | 0.000000                             |
| wsrep_commit_oool          | 0.000000                             |
| wsrep_commit_window        | 1.000000                             |
| wsrep_local_state          | 4                                    |
| wsrep_local_state_comment  | Synced (6)                           |
| wsrep_cert_index_size      | 2                                    |
| wsrep_causal_reads         | 0                                    |
| wsrep_cluster_conf_id      | 1                                    |
| wsrep_cluster_size         | 1                                    |
| wsrep_cluster_state_uuid   | 7dbb763b-0bea-11e3-0800-36c52d70d461 |
| wsrep_cluster_status       | Primary                              |
| wsrep_connected            | ON                                   |
| wsrep_local_index          | 0                                    |
| wsrep_provider_name        | Galera                               |
| wsrep_provider_vendor      | Codership Oy <info@codership.com>    |
| wsrep_provider_version     | 23.2.1(r129)                         |
| wsrep_ready                | ON                                   |
+----------------------------+--------------------------------------+
39 rows in set (0.00 sec)
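
For reference, the wsrep section of my configuration on both nodes looks roughly like the sketch below. The cluster name 'controller_cluster' and node 1's address 10.0.0.145 match the logs further down; the provider path, SST method and InnoDB settings are the usual ones and may not be exactly what I have, so treat this as approximate:

# /etc/mysql/conf.d/wsrep.cnf (sketch)
[mysqld]
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_name="controller_cluster"
# on node 1 I believe I used an empty wsrep_cluster_address="gcomm://" for the first start;
# node 2 points at node 1:
wsrep_cluster_address="gcomm://10.0.0.145"
wsrep_sst_method=rsync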
On the 2nd node, the MySQL service fails to start. This is what I see in /var/log/syslog:
Aug 23 17:26:23 control2 mysqld: 130823 17:26:23 [Note] WSREP: GMCast version 0
Aug 23 17:26:23 control2 mysqld: 130823 17:26:23 [Note] WSREP: (c008d869-0c10-11e3-0800-146570776113, 'tcp://0.0.0.0:4567') listening at tcp://0.0.0.0:4567
Aug 23 17:26:23 control2 mysqld: 130823 17:26:23 [Note] WSREP: (c008d869-0c10-11e3-0800-146570776113, 'tcp://0.0.0.0:4567') multicast: , ttl: 1
Aug 23 17:26:23 control2 mysqld: 130823 17:26:23 [Note] WSREP: EVS version 0
Aug 23 17:26:23 control2 mysqld: 130823 17:26:23 [Note] WSREP: PC version 0
Aug 23 17:26:23 control2 mysqld: 130823 17:26:23 [Note] WSREP: gcomm: connecting to group 'controller_cluster', peer '10.0.0.145:'
Aug 23 17:26:23 control2 mysqld: 130823 17:26:23 [Note] WSREP: declaring 130214d4-0c0e-11e3-0800-27da27176580 stable
Aug 23 17:26:23 control2 mysqld: 130823 17:26:23 [Note] WSREP: view(view_id(PRIM,130214d4-0c0e-11e3-0800-27da27176580,6) memb {
Aug 23 17:26:23 control2 mysqld: #011130214d4-0c0e-11e3-0800-27da27176580,
Aug 23 17:26:23 control2 mysqld: #011c008d869-0c10-11e3-0800-146570776113,
Aug 23 17:26:23 control2 mysqld: } joined {
Aug 23 17:26:23 control2 mysqld: } left {
Aug 23 17:26:23 control2 mysqld: } partitioned {
Aug 23 17:26:23 control2 mysqld: })
Aug 23 17:26:24 control2 mysqld: 130823 17:26:24 [Note] WSREP: gcomm: connected
Aug 23 17:26:24 control2 mysqld: 130823 17:26:24 [Note] WSREP: Changing maximum packet size to 64500, resulting msg size: 32636
Aug 23 17:26:24 control2 mysqld: 130823 17:26:24 [Note] WSREP: Shifting CLOSED -> OPEN (TO: 0)
Aug 23 17:26:24 control2 mysqld: 130823 17:26:24 [Note] WSREP: Opened channel 'controller_cluster'
Aug 23 17:26:24 control2 mysqld: 130823 17:26:24 [Note] WSREP: Waiting for SST to complete.
Aug 23 17:26:24 control2 mysqld: 130823 17:26:24 [Note] WSREP: New COMPONENT: primary = yes, bootstrap = no, my_idx = 1, memb_num = 2
Aug 23 17:26:24 control2 mysqld: 130823 17:26:24 [Note] WSREP: STATE EXCHANGE: Waiting for state UUID.
Aug 23 17:26:24 control2 mysqld: 130823 17:26:24 [Note] WSREP: STATE EXCHANGE: sent state msg: c059b4f5-0c10-11e3-0800-90932b673ef2 ...
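
For completeness, this is roughly how I start the nodes and what I check when the second one fails (standard Ubuntu service commands, nothing exotic):

# on node 1 (started first, bootstraps the cluster)
sudo service mysql start

# on node 2 - this is the command that reports failure
sudo service mysql start
sudo service mysql status

# while node 2 is starting I watch the log
tail -f /var/log/syslog | grep mysqld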