I get an Error when adding swift service to storage node

asked 2018-01-11 19:40:50 -0600

jcorkey

updated 2018-01-11 20:47:05 -0600

I deployed OpenStack using Kolla 5.0.0. I have one control node, two compute nodes, and one storage node.

The multinode inventory used to be configured so that all services except the compute services ran on the control node. Now I am trying to deploy so that the Glance, Cinder, and Swift services all run on my storage node instead of the control node.

I have configured the storage node with all the dependencies that Kolla needs, according to the documentation.

When I run kolla-ansible prechecks -i multinode, I get the error below:

TASK [swift : Checking free port for Swift Account Server] *********************
fatal: [openstackstorage.glv.internal]: FAILED! => {"failed": true, "msg": "the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'dict object' has no attribute u'ansible_em1'\n\nThe error appears to have been in '/usr/share/kolla-ansible/ansible/roles/swift/tasks/precheck.yml': line 11, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Checking free port for Swift Account Server\n  ^ here\n"}

It's saying something about 'dict object' has no attribute u'ansible_em1', but I'm not sure what this is trying to tell me.

em1 is the name of all the NICs on the control and compute nodes, but my storage node's NIC is named enp5s0. Could the fact that my storage node's NIC is enp5s0 and not em1 be causing the issue?
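From the comments in globals.yml it sounds like the interface name can be overridden per host in the inventory instead of relying on the global network_interface. A sketch of what I think that would look like for my storage node (the group name and host variable syntax here are my assumption from the Ansible inventory docs; I have not verified this fixes the precheck):

```ini
# multinode inventory (sketch, unverified)
[storage]
# Override the interface just for this host, since its NIC is enp5s0
# rather than em1 like the control/compute nodes.
openstackstorage.glv.internal network_interface=enp5s0
```

If that is the right mechanism, the swift precheck should then look up ansible_enp5s0 on the storage node instead of the missing ansible_em1 fact.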

Below is some info from my configuration, taken from the multinode inventory file and globals.yml.

globals.yml

##############################
# Neutron - Networking Options
##############################
# This interface is what all your api services will be bound to by default.
# Additionally, all vxlan/tunnel and storage network traffic will go over this
# interface by default. This interface must contain an IPv4 address.
# It is possible for hosts to have non-matching names of interfaces - these can
# be set in an inventory file per host or per group or stored separately, see
#     http://docs.ansible.com/ansible/intro_inventory.html
# Yet another way to workaround the naming problem is to create a bond for the
# interface on all hosts and give the bond name here. Similar strategy can be
# followed for other types of interfaces.
network_interface: "em1"

# These can be adjusted for even more customization. The default is the same as
# the 'network_interface'. These interfaces must contain an IPv4 address.
#kolla_external_vip_interface: "{{ network_interface }}"
#api_interface: "{{ network_interface }}"
#storage_interface: "{{ network_interface }}"
#cluster_interface: "{{ network_interface }}"
#tunnel_interface: "{{ network_interface }}"
#dns_interface: "{{ network_interface }}"

# This is the raw interface given to neutron as its external network port. Even
# though an IP address can exist on this interface, it will be unusable in most
# configurations. It is recommended this interface not be configured with any IP
# addresses for that reason.
neutron_external_interface: "em2"

# Valid options are [ openvswitch, linuxbridge, vmware_nsxv, vmware_dvs, opendaylight ]
#neutron_plugin_agent: "openvswitch"

multinode file

[control]
# These hostnames must be resolvable from your deployment host
openstackcontroller.glv.internal



# The above can also be specified as follows:
#control[01 ...