"No route to host" while playing setup-infrastructure.yml

asked 2019-01-09 07:13:13 -0500

iwzah

updated 2019-01-13 05:59:44 -0500

EDIT: In the meantime we have added a second bond (two further NICs) to all servers and used the config from the prod-ceph example. The playbook stops at the same position with the same result under CentOS 7.6: "No route to host" for http://internal_vip:8181/..... (internal_vip is inside the container mgmt CIDR).
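For reference, this is roughly how I reproduce the failure by hand from the deployment host (the address below is a placeholder; internal_vip stands in for our actual internal VIP on br-mgmt):

```
# Placeholder; internal_vip is our internal load-balancer VIP on br-mgmt.
INTERNAL_VIP=172.29.236.9

# The port the playbook is trying to reach.
curl -v "http://${INTERNAL_VIP}:8181/"
# -> curl: (7) Failed connect ... No route to host

# Basic reachability checks from the host side.
ping -c 3 "${INTERNAL_VIP}"
ip route get "${INTERNAL_VIP}"
ip -d addr show br-mgmt
```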

(BTW: we had to override the libvirt version to 4.5.0 due to a repo build failure.)

ORIGINAL:

I am trying to deploy OpenStack using OpenStack-Ansible (containerized). We have several servers, each with (currently) one dual-port 10 GBit NIC in a VLAN-tagged environment. I have assigned four VLAN tags for that project and configured four VLANs with the corresponding bridges br-mgmt, br-storage, br-vlan and br-vxlan, as sketched below.
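To illustrate the layout (the VLAN IDs and addresses below are placeholders, not our real values; on the servers this is done with persistent ifcfg files, shown here as ip commands only for brevity):

```
# bond0 = bond over the two 10G ports of the dual-port NIC.
# One tagged VLAN per network, each enslaved to its bridge.
ip link add link bond0 name bond0.10 type vlan id 10   # mgmt
ip link add link bond0 name bond0.20 type vlan id 20   # storage
ip link add link bond0 name bond0.30 type vlan id 30   # vxlan
ip link add link bond0 name bond0.40 type vlan id 40   # vlan

ip link add br-mgmt type bridge
ip link set bond0.10 master br-mgmt
ip addr add 172.29.236.11/22 dev br-mgmt               # placeholder host address
# ... same pattern for br-storage, br-vxlan and br-vlan
```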

All devices are up. I can access the internet.

The playbook always fails with "No route to host" when accessing the VIP, which is a member of the mgmt VLAN. If I manually add a corresponding iptables rule for port 8181, I can access the resource. I know - don't touch iptables in OSA; it was only temporary and was removed by a reboot after that check. The error must be in my configuration or my understanding.
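The temporary rule I used for that check was roughly the following (gone again after the reboot):

```
# Temporary, for testing only: accept traffic to port 8181 on this host.
iptables -I INPUT -p tcp --dport 8181 -j ACCEPT
```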

Is there any problem with deploying OpenStack via OpenStack-Ansible using one (bonded) NIC inside a VLAN-tagged environment? After many sleepless nights and much reading, I can't see the wood for the trees. Any help is appreciated!
