
Packstack installation fails because neutron-server does not start

Hello OpenStackers,

This is my first attempt at an OpenStack installation. I have searched through reported OpenStack Neutron failures, but none of the suggested solutions have worked for me, so as a last resort I am placing my faith in this community. Please help me resolve this issue.

Thanks in advance for your guidance and support.

========Overview========

Architecture

Physical Server I - Hypervisor: OVS 3.3.2

VM1 - Oracle Linux 6 Update 6 - OpenStack Controller [172.30.21.160]

VM2 - Oracle Linux 6 Update 6 - OpenStack Neutron [172.30.21.161]

Physical Server II - Hypervisor: OVS 3.3.2

VM1 - Oracle Linux 6 Update 6 - OpenStack Compute Node [172.30.21.72]

VM2 - Oracle Linux 6 Update 6 - OpenStack Compute Node [172.30.21.73]

I am using Packstack to install OpenStack, and I used the answer file generator to produce the configuration below.
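
For reference, this is the command that produces a fresh answer file (a sketch; answers.txt matches the file name used in the installation run shown later):

[root@nfv-openstack-controller ~]# packstack --gen-answer-file=answers.txt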

Packstack Answer File (answers.txt)

[general]

# Path to a Public key to install on servers. If a usable key has not
# been installed on the remote servers the user will be prompted for a
# password and this key will be installed so the password will not be
# required again
CONFIG_SSH_KEY=

# Set to 'y' if you would like Packstack to install MySQL
CONFIG_MYSQL_INSTALL=y

# Set to 'y' if you would like Packstack to install OpenStack Image
# Service (Glance)
CONFIG_GLANCE_INSTALL=y

# Set to 'y' if you would like Packstack to install OpenStack Block
# Storage (Cinder)
CONFIG_CINDER_INSTALL=y

# Set to 'y' if you would like Packstack to install OpenStack Compute
# (Nova)
CONFIG_NOVA_INSTALL=y

# Set to 'y' if you would like Packstack to install OpenStack
# Networking (Neutron)
CONFIG_NEUTRON_INSTALL=y

# Set to 'y' if you would like Packstack to install OpenStack
# Dashboard (Horizon)
CONFIG_HORIZON_INSTALL=y

# Set to 'y' if you would like Packstack to install OpenStack Object
# Storage (Swift)
CONFIG_SWIFT_INSTALL=y

# Set to 'y' if you would like Packstack to install OpenStack
# Metering (Ceilometer)
CONFIG_CEILOMETER_INSTALL=y

# Set to 'y' if you would like Packstack to install OpenStack
# Orchestration (Heat)
CONFIG_HEAT_INSTALL=n

# Set to 'y' if you would like Packstack to install the OpenStack
# Client packages. An admin "rc" file will also be installed
CONFIG_CLIENT_INSTALL=y

# Comma separated list of NTP servers. Leave plain if Packstack
# should not install ntpd on instances.
CONFIG_NTP_SERVERS=172.30.0.60

# Set to 'y' if you would like Packstack to install Nagios to monitor
# OpenStack hosts
CONFIG_NAGIOS_INSTALL=n

# Comma separated list of servers to be excluded from installation in
# case you are running Packstack the second time with the same answer
# file and don't want Packstack to touch these servers. Leave plain if
# you don't need to exclude any server.
EXCLUDE_SERVERS=

# Set to 'y' if you want to run OpenStack services in debug mode.
# Otherwise set to 'n'.
CONFIG_DEBUG_MODE=n

# Set to 'y' if you want to use VMware vCenter as hypervisor and
# storage. Otherwise set to 'n'.
CONFIG_VMWARE_BACKEND=n

# The IP address of the VMware vCenter server
CONFIG_VCENTER_HOST=

# The username to authenticate to VMware vCenter server
CONFIG_VCENTER_USER=

# The password to authenticate to VMware vCenter server
CONFIG_VCENTER_PASSWORD=

# The name of the vCenter cluster
CONFIG_VCENTER_CLUSTER_NAME=

# The IP address of the server on which to install MySQL
CONFIG_MYSQL_HOST=172.30.21.160

# Username for the MySQL admin user
CONFIG_MYSQL_USER=root

# Password for the MySQL admin user
CONFIG_MYSQL_PW=91c5f8cf5e8b4ddb

# Set the server for the AMQP service
CONFIG_AMQP_SERVER=rabbitmq

# The IP address of the server on which to install the AMQP service
CONFIG_AMQP_HOST=172.30.21.160

# Enable SSL for the AMQP service
CONFIG_AMQP_ENABLE_SSL=n

# Enable Authentication for the AMQP service
CONFIG_AMQP_ENABLE_AUTH=n

# The password for the NSS certificate database of the AMQP service
CONFIG_AMQP_NSS_CERTDB_PW=e0d03f0495de44658df3fd53e3e0945f

# The port in which the AMQP service listens to SSL connections
CONFIG_AMQP_SSL_PORT=5671

# The filename of the CA certificate that the AMQP service is going
# to use
CONFIG_AMQP_SSL_CACERT_FILE=/etc/pki/tls/cert.pem

# The filename of the certificate that the AMQP service is going to
# use
CONFIG_AMQP_SSL_CERT_FILE=/etc/pki/tls/certs/amqp_selfcert.pem

# The filename of the private key that the AMQP service is going to
# use
CONFIG_AMQP_SSL_KEY_FILE=/etc/pki/tls/private/amqp_selfkey.pem

# Auto Generates self signed SSL certificate and key
CONFIG_AMQP_SSL_SELF_SIGNED=y

# User for amqp authentication
CONFIG_AMQP_AUTH_USER=amqp_user

# Password for user authentication
CONFIG_AMQP_AUTH_PASSWORD=df15707f0bb54efc

# The IP address of the server on which to install Keystone
CONFIG_KEYSTONE_HOST=172.30.21.160

# The password to use for the Keystone to access DB
CONFIG_KEYSTONE_DB_PW=407dce8e44754c7e

# The token to use for the Keystone service api
CONFIG_KEYSTONE_ADMIN_TOKEN=19e7cfcccef3453c8528cf0a57086869

# The password to use for the Keystone admin user
CONFIG_KEYSTONE_ADMIN_PW=a75934871527476c

# The password to use for the Keystone demo user
CONFIG_KEYSTONE_DEMO_PW=1d6fef1e3f524f2a

# Keystone token format. Use either UUID or PKI
CONFIG_KEYSTONE_TOKEN_FORMAT=PKI

# The IP address of the server on which to install Glance
CONFIG_GLANCE_HOST=172.30.21.160

# The password to use for the Glance to access DB
CONFIG_GLANCE_DB_PW=0c34a43fc0f146fd

# The password to use for the Glance to authenticate with Keystone
CONFIG_GLANCE_KS_PW=a717fd0a72bb47b2

# The IP address of the server on which to install Cinder
CONFIG_CINDER_HOST=172.30.21.160

# The password to use for the Cinder to access DB
CONFIG_CINDER_DB_PW=5844c731b7544a07

# The password to use for the Cinder to authenticate with Keystone
CONFIG_CINDER_KS_PW=d7b014595a6549ec

# The Cinder backend to use, valid options are: lvm, gluster, nfs,
# vmdk
CONFIG_CINDER_BACKEND=lvm

# Create Cinder's volumes group. This should only be done for testing
# on a proof-of-concept installation of Cinder.  This will create a
# file-backed volume group and is not suitable for production usage.
CONFIG_CINDER_VOLUMES_CREATE=y

# Cinder's volumes group size. Note that actual volume size will be
# extended with 3% more space for VG metadata.
CONFIG_CINDER_VOLUMES_SIZE=20G

# A single or comma separated list of gluster volume shares to mount,
# eg: ip-address:/vol-name, domain:/vol-name
CONFIG_CINDER_GLUSTER_MOUNTS=

# A single or comma separated list of NFS exports to mount, eg: ip-
# address:/export-name
CONFIG_CINDER_NFS_MOUNTS=172.30.21.70:/export/Controller-Neutron-Filesystem

# The IP address of the server on which to install the Nova API
# service
CONFIG_NOVA_API_HOST=172.30.21.160

# The IP address of the server on which to install the Nova Cert
# service
CONFIG_NOVA_CERT_HOST=172.30.21.160

# The IP address of the server on which to install the Nova VNC proxy
CONFIG_NOVA_VNCPROXY_HOST=172.30.21.160

# A comma separated list of IP addresses on which to install the Nova
# Compute services
CONFIG_NOVA_COMPUTE_HOSTS=172.30.21.72,172.30.21.73

# The IP address of the server on which to install the Nova Conductor
# service
CONFIG_NOVA_CONDUCTOR_HOST=172.30.21.160

# The password to use for the Nova to access DB
CONFIG_NOVA_DB_PW=373789f138d64866

# The password to use for the Nova to authenticate with Keystone
CONFIG_NOVA_KS_PW=11c64138eb784f10

# The IP address of the server on which to install the Nova Scheduler
# service
CONFIG_NOVA_SCHED_HOST=172.30.21.160

# The overcommitment ratio for virtual to physical CPUs. Set to 1.0
# to disable CPU overcommitment
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0

# The overcommitment ratio for virtual to physical RAM. Set to 1.0 to
# disable RAM overcommitment
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5

# Private interface for Flat DHCP on the Nova compute servers
CONFIG_NOVA_COMPUTE_PRIVIF=eth1

# The list of IP addresses of the server on which to install the Nova
# Network service
CONFIG_NOVA_NETWORK_HOSTS=172.30.21.160

# Nova network manager
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager

# Public interface on the Nova network server
CONFIG_NOVA_NETWORK_PUBIF=eth0

# Private interface for network manager on the Nova network server
CONFIG_NOVA_NETWORK_PRIVIF=eth1

# IP Range for network manager
CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.0.0/16

# IP Range for Floating IP's
CONFIG_NOVA_NETWORK_FLOATRANGE=182.16.0.0/16

# Name of the default floating pool to which the specified floating
# ranges are added to
CONFIG_NOVA_NETWORK_DEFAULTFLOATINGPOOL=nova

# Automatically assign a floating IP to new instances
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n

# First VLAN for private networks
CONFIG_NOVA_NETWORK_VLAN_START=100

# Number of networks to support
CONFIG_NOVA_NETWORK_NUMBER=3

# Number of addresses in each private subnet
CONFIG_NOVA_NETWORK_SIZE=255

# The IP addresses of the server on which to install the Neutron
# server
CONFIG_NEUTRON_SERVER_HOST=172.30.21.161

# The password to use for Neutron to authenticate with Keystone
CONFIG_NEUTRON_KS_PW=607ed64a0dd843b4

# The password to use for Neutron to access DB
CONFIG_NEUTRON_DB_PW=99bab38803c54341

# A comma separated list of IP addresses on which to install Neutron
# L3 agent
CONFIG_NEUTRON_L3_HOSTS=172.30.21.161

# The name of the bridge that the Neutron L3 agent will use for
# external traffic, or 'provider' if using provider networks
# CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_L3_EXT_BRIDGE=eth0

# A comma separated list of IP addresses on which to install Neutron
# DHCP agent
CONFIG_NEUTRON_DHCP_HOSTS=172.30.21.161

# A comma separated list of IP addresses on which to install Neutron
# LBaaS agent
CONFIG_NEUTRON_LBAAS_HOSTS=

# The name of the L2 plugin to be used with Neutron
CONFIG_NEUTRON_L2_PLUGIN=openvswitch

# A comma separated list of IP addresses on which to install Neutron
# metadata agent
CONFIG_NEUTRON_METADATA_HOSTS=172.30.21.161

# A comma separated list of IP addresses on which to install Neutron
# metadata agent
CONFIG_NEUTRON_METADATA_PW=01ef98a3bdc44916

# A comma separated list of network type driver entrypoints to be
# loaded from the neutron.ml2.type_drivers namespace.
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=local

# A comma separated ordered list of network_types to allocate as
# tenant networks. The value 'local' is only useful for single-box
# testing but provides no connectivity between hosts.
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=local

# A comma separated ordered list of networking mechanism driver
# entrypoints to be loaded from the neutron.ml2.mechanism_drivers
# namespace.
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch

# A comma separated  list of physical_network names with which flat
# networks can be created. Use * to allow flat networks with arbitrary
# physical_network names.
#CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=

# A comma separated list of <physical_network>:<vlan_min>:<vlan_max>
# or <physical_network> specifying physical_network names usable for
# VLAN provider and tenant networks, as well as ranges of VLAN tags on
# each available for allocation to tenant networks.
CONFIG_NEUTRON_ML2_VLAN_RANGES=

# A comma separated list of <tun_min>:<tun_max> tuples enumerating
# ranges of GRE tunnel IDs that are available for tenant network
# allocation. Should be an array with tun_max +1 - tun_min > 1000000
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=

# Multicast group for VXLAN. When left unconfigured, multicast VXLAN
# mode is disabled. Should be a multicast IP (v4 or v6) address.
CONFIG_NEUTRON_ML2_VXLAN_GROUP=

# A comma separated list of <vni_min>:<vni_max> tuples enumerating
# ranges of VXLAN VNI IDs that are available for tenant network
# allocation. Min value is 0 and Max value is 16777215.
CONFIG_NEUTRON_ML2_VNI_RANGES=

# The name of the L2 agent to be used with Neutron
CONFIG_NEUTRON_L2_AGENT=openvswitch

# The type of network to allocate for tenant networks (eg. vlan,
# local)
CONFIG_NEUTRON_LB_TENANT_NETWORK_TYPE=local

# A comma separated list of VLAN ranges for the Neutron linuxbridge
# plugin (eg. physnet1:1:4094,physnet2,physnet3:3000:3999)
CONFIG_NEUTRON_LB_VLAN_RANGES=

# A comma separated list of interface mappings for the Neutron
# linuxbridge plugin (eg. physnet1:br-eth1,physnet2:br-eth2,physnet3
# :br-eth3)
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=

# Type of network to allocate for tenant networks (eg. vlan, local,
# gre, vxlan)
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=local

# A comma separated list of VLAN ranges for the Neutron openvswitch
# plugin (eg. physnet1:1:4094,physnet2,physnet3:3000:3999)
CONFIG_NEUTRON_OVS_VLAN_RANGES=

# A comma separated list of bridge mappings for the Neutron
# openvswitch plugin (eg. physnet1:br-eth1,physnet2:br-eth2,physnet3
# :br-eth3)
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=

# A comma separated list of colon-separated OVS bridge:interface
# pairs. The interface will be added to the associated bridge.
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=

# A comma separated list of tunnel ranges for the Neutron openvswitch
# plugin (eg. 1:1000)
CONFIG_NEUTRON_OVS_TUNNEL_RANGES=1:4094

# The interface for the OVS tunnel. Packstack will override the IP
# address used for tunnels on this hypervisor to the IP found on the
# specified interface. (eg. eth1)
CONFIG_NEUTRON_OVS_TUNNEL_IF=eth0

# VXLAN UDP port
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789

# The IP address of the server on which to install the OpenStack
# client packages. An admin "rc" file will also be installed
CONFIG_OSCLIENT_HOST=

# The IP address of the server on which to install Horizon
CONFIG_HORIZON_HOST=172.30.21.160

# To set up Horizon communication over https set this to "y"
CONFIG_HORIZON_SSL=n

# PEM encoded certificate to be used for ssl on the https server,
# leave blank if one should be generated, this certificate should not
# require a passphrase
CONFIG_SSL_CERT=

# Keyfile corresponding to the certificate if one was entered
CONFIG_SSL_KEY=

# PEM encoded CA certificates from which the certificate chain of the
# server certificate can be assembled.
CONFIG_SSL_CACHAIN=

# The IP address on which to install the Swift proxy service
# (currently only single proxy is supported)
CONFIG_SWIFT_PROXY_HOSTS=172.30.21.160

# The password to use for the Swift to authenticate with Keystone
CONFIG_SWIFT_KS_PW=287ac9b60d3a4f42

# A comma separated list of IP addresses on which to install the
# Swift Storage services, each entry should take the format
# <ipaddress>[/dev], for example 127.0.0.1/vdb will install /dev/vdb
# on 127.0.0.1 as a swift storage device(packstack does not create the
# filesystem, you must do this first), if /dev is omitted Packstack
# will create a loopback device for a test setup
CONFIG_SWIFT_STORAGE_HOSTS=172.30.21.160

# Number of swift storage zones, this number MUST be no bigger than
# the number of storage devices configured
CONFIG_SWIFT_STORAGE_ZONES=1

# Number of swift storage replicas, this number MUST be no bigger
# than the number of storage zones configured
CONFIG_SWIFT_STORAGE_REPLICAS=1

# FileSystem type for storage nodes
CONFIG_SWIFT_STORAGE_FSTYPE=ext4

# Shared secret for Swift
CONFIG_SWIFT_HASH=df37d0a6d45a4d04

# Size of the swift loopback file storage device
CONFIG_SWIFT_STORAGE_SIZE=2G

# Whether to provision for demo usage and testing. Note that
# provisioning is only supported for all-in-one installations.
CONFIG_PROVISION_DEMO=n

# Whether to configure tempest for testing. Note that provisioning is
# only supported for all-in-one installations.
CONFIG_PROVISION_TEMPEST=n

# The CIDR network address for the floating IP subnet
CONFIG_PROVISION_DEMO_FLOATRANGE=162.16.0.0/16

# The uri of the tempest git repository to use
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git

# The revision of the tempest git repository to use
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master

# Whether to configure the ovs external bridge in an all-in-one
# deployment
CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n

# The IP address of the server on which to install Heat service
CONFIG_HEAT_HOST=172.30.21.160

# The password used by Heat user to authenticate against MySQL
CONFIG_HEAT_DB_PW=4439c293b20142fb

# The encryption key to use for authentication info in database
CONFIG_HEAT_AUTH_ENC_KEY=4f35da87cdb34072

# The password to use for the Heat to authenticate with Keystone
CONFIG_HEAT_KS_PW=8dc2b32ffc81460c

# Set to 'y' if you would like Packstack to install Heat CloudWatch
# API
CONFIG_HEAT_CLOUDWATCH_INSTALL=n

# Set to 'y' if you would like Packstack to install Heat
# CloudFormation API
CONFIG_HEAT_CFN_INSTALL=n

# The IP address of the server on which to install Heat CloudWatch
# API service
CONFIG_HEAT_CLOUDWATCH_HOST=172.30.21.160

# The IP address of the server on which to install Heat
# CloudFormation API service
CONFIG_HEAT_CFN_HOST=172.30.21.160

# The IP address of the server on which to install Ceilometer
CONFIG_CEILOMETER_HOST=172.30.21.160

# Secret key for signing metering messages.
CONFIG_CEILOMETER_SECRET=bee3e19079a04ca0

# The password to use for Ceilometer to authenticate with Keystone
CONFIG_CEILOMETER_KS_PW=bf1212080c5e44b6

# The IP address of the server on which to install mongodb
CONFIG_MONGODB_HOST=172.30.21.160

# The IP address of the server on which to install the Nagios server
CONFIG_NAGIOS_HOST=172.30.21.160

# The password of the nagiosadmin user on the Nagios server
CONFIG_NAGIOS_PW=ccd847765d614b86

# To subscribe each server to EPEL enter "y"
CONFIG_USE_EPEL=n

# A comma separated list of URLs to any additional yum repositories
# to install
CONFIG_REPO=
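
Note that the answer file places Neutron on 172.30.21.161 while RabbitMQ and MySQL both live on 172.30.21.160, so the network node must be able to reach those services. A minimal reachability check from the network node (assuming nc is installed):

[root@nfv-openstack-neutron ~]# nc -zv 172.30.21.160 5672
[root@nfv-openstack-neutron ~]# nc -zv 172.30.21.160 3306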

Neutron Config File

[root@nfv-openstack-neutron neutron]# cat neutron.conf                                                                                                                    
[DEFAULT]                                                                                                                                                                 
# Print more verbose output (set logging level to INFO instead of default WARNING level).                                                                                 
# verbose = True                                                                                                                                                          
verbose = True                                                                                                                                                            

# Print debugging output (set logging level to DEBUG instead of default WARNING level).
# debug = False                                                                        
debug = False                                                                          

# Where to store Neutron state files.  This directory must be writable by the
# user executing the agent.                                                  
# state_path = /var/lib/neutron                                              

# Where to store lock files
# lock_path = $state_path/lock

# log_format = %(asctime)s %(levelname)8s [%(name)s] %(message)s
# log_date_format = %Y-%m-%d %H:%M:%S                           

# use_syslog                           -> syslog
# log_file and log_dir                 -> log_dir/log_file
# (not log_file) and log_dir           -> log_dir/{binary_name}.log
# use_stderr                           -> stderr                   
# (not use_stderr) and (not log_file)  -> stdout
# publish_errors                       -> notification system      

# use_syslog = False
use_syslog = False  
# syslog_log_facility = LOG_USER

# use_stderr = False
# log_file =        
# log_dir =         
log_dir =/var/log/neutron

# publish_errors = False

# Address to bind the API server to
# bind_host = 0.0.0.0              
bind_host = 0.0.0.0                

# Port the bind the API server to
# bind_port = 9696               
bind_port = 9696                 

# Path to the extensions.  Note that this can be a colon-separated list of
# paths.  For example:                                                    
# api_extensions_path = extensions:/path/to/more/extensions:/even/more/extensions
# The __path__ of neutron.extensions is appended to this, so if your             
# extensions are in there you don't need to specify them here                    
# api_extensions_path =                                                          

# (StrOpt) Neutron core plugin entrypoint to be loaded from the
# neutron.core_plugins namespace. See setup.cfg for the entrypoint names of the
# plugins included in the neutron source distribution. For compatibility with  
# previous versions, the class name of a plugin can be specified instead of its
# entrypoint name.                                                             
#                                                                              
# core_plugin =                                                                
core_plugin =neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2 
# Example: core_plugin = ml2                                                   

# (ListOpt) List of service plugin entrypoints to be loaded from the
# neutron.service_plugins namespace. See setup.cfg for the entrypoint names of
# the plugins included in the neutron source distribution. For compatibility  
# with previous versions, the class name of a plugin can be specified instead 
# of its entrypoint name.                                                     
#                                                                             
# service_plugins =                                                           
# Example: service_plugins = router,firewall,lbaas,vpnaas,metering            

# Paste configuration file
# api_paste_config = /usr/share/neutron/api-paste.ini

# The strategy to be used for auth.
# Supported values are 'keystone'(default), 'noauth'.
# auth_strategy = noauth                             
auth_strategy = keystone                             

# Base MAC address. The first 3 octets will remain unchanged. If the
# 4th octet is not 00, it will also be used. The others will be
# randomly generated.                                               
# 3 octet                                                           
# base_mac = fa:16:3e:00:00:00                                      
base_mac = fa:16:3e:00:00:00                                        
# 4 octet                                                           
# base_mac = fa:16:3e:4f:00:00                                      

# Maximum amount of retries to generate a unique MAC address
# mac_generation_retries = 16                               
mac_generation_retries = 16                                 

# DHCP Lease duration (in seconds)
# dhcp_lease_duration = 86400     
dhcp_lease_duration = 86400       

# Allow sending resource operation notification to DHCP agent
# dhcp_agent_notification = True                             
dhcp_agent_notification = True                               

# Enable or disable bulk create/update/delete operations
# allow_bulk = True                                     
allow_bulk = True                                       
# Enable or disable pagination                          
# allow_pagination = False                              
allow_pagination = False                                
# Enable or disable sorting                             
# allow_sorting = False                                 
allow_sorting = False                                   
# Enable or disable overlapping IPs for subnets         
# Attention: the following parameter MUST be set to False if Neutron is
# being used in conjunction with nova security groups                  
# allow_overlapping_ips = True                                         
allow_overlapping_ips = True                                           
# Ensure that configured gateway is on subnet                          
# force_gateway_on_subnet = True                                       


# RPC configuration options. Defined in rpc __init__
# The messaging module to use, defaults to kombu.   
# rpc_backend = neutron.openstack.common.rpc.impl_kombu
rpc_backend = neutron.openstack.common.rpc.impl_kombu  
# Size of RPC thread pool                              
# rpc_thread_pool_size = 64                            
# Size of RPC connection pool                          
# rpc_conn_pool_size = 30                              
# Seconds to wait for a response from call or multicall
# rpc_response_timeout = 60                            
# Seconds to wait before a cast expires (TTL). Only supported by impl_zmq.
# rpc_cast_timeout = 30                                                   
# Modules of exceptions that are permitted to be recreated                
# upon receiving exception data from an rpc call.                         
# allowed_rpc_exception_modules = neutron.openstack.common.exception, nova.exception
# AMQP exchange to connect to if using RabbitMQ or QPID                             
# control_exchange = neutron                                                        
control_exchange = neutron                                                          

# If passed, use a fake RabbitMQ provider
# fake_rabbit = False                    

# Configuration options if sending notifications via kombu rpc (these are
# the defaults)                                                          
# SSL version to use (valid only if SSL enabled)                         
# kombu_ssl_version =                                                    
# SSL key file (valid only if SSL enabled)                               
# kombu_ssl_keyfile =                                                    
# SSL cert file (valid only if SSL enabled)                              
# kombu_ssl_certfile =                                                   
# SSL certification authority file (valid only if SSL enabled)           
# kombu_ssl_ca_certs =                                                   
# IP address of the RabbitMQ installation                                
# rabbit_host = localhost                                                
rabbit_host = 172.30.21.160                                              
# Password of the RabbitMQ server                                        
# rabbit_password = guest                                                
rabbit_password = df15707f0bb54efc                                       
# Port where RabbitMQ server is running/listening                        
# rabbit_port = 5672                                                     
rabbit_port = 5672                                                       
# RabbitMQ single or HA cluster (host:port pairs i.e: host1:5672, host2:5672)
# rabbit_hosts is defaulted to '$rabbit_host:$rabbit_port'                   
# rabbit_hosts = localhost:5672                                              
rabbit_hosts = 172.30.21.160:5672                                            
# User ID used for RabbitMQ connections                                      
# rabbit_userid = guest                                                      
rabbit_userid = amqp_user                                                    
# Location of a virtual RabbitMQ installation.                               
# rabbit_virtual_host = /                                                    
rabbit_virtual_host = /                                                      
# Maximum retries with trying to connect to RabbitMQ                         
# (the default of 0 implies an infinite retry count)                         
# rabbit_max_retries = 0                                                     
# RabbitMQ connection retry interval                                         
# rabbit_retry_interval = 1                                                  
# Use HA queues in RabbitMQ (x-ha-policy: all). You need to                  
# wipe RabbitMQ database when changing this option. (boolean value)          
# rabbit_ha_queues = false                                                   
rabbit_ha_queues = False                                                     

# QPID
# rpc_backend=neutron.openstack.common.rpc.impl_qpid
# Qpid broker hostname                              
# qpid_hostname = localhost                         
# Qpid broker port                                  
# qpid_port = 5672                                  
# Qpid single or HA cluster (host:port pairs i.e: host1:5672, host2:5672)
# qpid_hosts is defaulted to '$qpid_hostname:$qpid_port'                 
# qpid_hosts = localhost:5672                                            
# Username for qpid connection                                           
# qpid_username = ''                                                     
# Password for qpid connection                                           
# qpid_password = ''                                                     
# Space separated list of SASL mechanisms to use for auth                
# qpid_sasl_mechanisms = ''                                              
# Seconds between connection keepalive heartbeats                        
# qpid_heartbeat = 60                                                    
# Transport to use, either 'tcp' or 'ssl'                                
# qpid_protocol = tcp                                                    
# Disable Nagle algorithm                                                
# qpid_tcp_nodelay = True                                                

# ZMQ
# rpc_backend=neutron.openstack.common.rpc.impl_zmq
# ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP.
# The "host" option should point or resolve to this address.                  
# rpc_zmq_bind_address = *                                                    

# ============ Notification System Options =====================

# Notifications can be sent when network/subnet/port are created, updated or deleted.
# There are three methods of sending notifications: logging (via the                 
# log_file directive), rpc (via a message queue) and                                 
# noop (no notifications sent, the default)                                          

# Notification_driver can be defined multiple times
# Do nothing driver                                
# notification_driver = neutron.openstack.common.notifier.no_op_notifier
# Logging driver                                                        
# notification_driver = neutron.openstack.common.notifier.log_notifier  
# RPC driver.                                                           
# notification_driver = neutron.openstack.common.notifier.rpc_notifier  

# default_notification_level is used to form actual topic name(s) or to set logging level
# default_notification_level = INFO                                                      

# default_publisher_id is a part of the notification payload
# host = myhost.com                                         
# default_publisher_id = $host                              

# Defined in rpc_notifier, can be comma separated values.
# The actual topic names will be %s.%(default_notification_level)s
# notification_topics = notifications                             

# Default maximum number of items returned in a single response,
# value == infinite and value < 0 means no max limit, and value must
# be greater than 0. If the number of items requested is greater than
# pagination_max_limit, server will just return pagination_max_limit 
# of number of items.                                                
# pagination_max_limit = -1                                          

# Maximum number of DNS nameservers per subnet
# max_dns_nameservers = 5                     

# Maximum number of host routes per subnet
# max_subnet_host_routes = 20             

# Maximum number of fixed ips per port
# max_fixed_ips_per_port = 5          

# =========== items for agent management extension =============
# Seconds to regard the agent as down; should be at least twice 
# report_interval, to be sure the agent is down for good        
# agent_down_time = 75                                          
agent_down_time = 75                                            
# ===========  end of items for agent management extension =====

# =========== items for agent scheduler extension =============
# Driver to use for scheduling network to DHCP agent           
# network_scheduler_driver = neutron.scheduler.dhcp_agent_scheduler.ChanceScheduler
# Driver to use for scheduling router to a default L3 agent                        
# router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.ChanceScheduler   
router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.ChanceScheduler     
# Driver to use for scheduling a loadbalancer pool to an lbaas agent               
# loadbalancer_pool_scheduler_driver = neutron.services.loadbalancer.agent_scheduler.ChanceScheduler

# Allow auto scheduling networks to DHCP agent. It will schedule non-hosted
# networks to first DHCP agent which sends get_active_networks message to  
# neutron server                                                           
# network_auto_schedule = True                                             

# Allow auto scheduling routers to L3 agent. It will schedule non-hosted
# routers to first L3 agent which sends sync_routers message to neutron server
# router_auto_schedule = True                                                 

# Number of DHCP agents scheduled to host a network. This enables redundant
# DHCP agents for configured networks.                                     
# dhcp_agents_per_network = 1                                              
dhcp_agents_per_network = 1                                                

# ===========  end of items for agent scheduler extension =====

# =========== WSGI parameters related to the API server ==============
# Number of separate worker processes to spawn.  The default, 0, runs the
# worker thread in the current process.  Greater than 0 launches that number of
# child processes as workers.  The parent process manages them.                
# api_workers = 0                                                              
api_workers = 0                                                                

# Number of separate RPC worker processes to spawn.  The default, 0, runs the
# worker thread in the current process.  Greater than 0 launches that number of
# child processes as RPC workers.  The parent process manages them.            
# This feature is experimental until issues are addressed and testing has been 
# enabled for various plugins for compatibility.                               
# rpc_workers = 0                                                              
rpc_workers = 0                                                                

# Sets the value of TCP_KEEPIDLE in seconds to use for each server socket when
# starting API server. Not supported on OS X.                                 
# tcp_keepidle = 600                                                          

# Number of seconds to keep retrying to listen
# retry_until_window = 30                     

# Number of backlog requests to configure the socket with.
# backlog = 4096                                          

# Max header line to accommodate large tokens
# max_header_line = 16384                    

# Enable SSL on the API server
# use_ssl = False             
use_ssl = False               

# Certificate file to use when starting API server securely
# ssl_cert_file = /path/to/certfile                        

# Private key file to use when starting API server securely
# ssl_key_file = /path/to/keyfile                          

# CA certificate file to use when starting API server securely to
# verify connecting clients. This is an optional parameter only required if
# API clients need to authenticate to the API server using SSL certificates
# signed by a trusted CA                                                   
# ssl_ca_file = /path/to/cafile                                            
# ======== end of WSGI parameters related to the API server ==========     


# ======== neutron nova interactions ==========
# Send notification to nova when port status is active.
# notify_nova_on_port_status_changes = False           
notify_nova_on_port_status_changes = True              

# Send notifications to nova when port data (fixed_ips/floatingips) change
# so nova can update its cache.
# notify_nova_on_port_data_changes = False                                
notify_nova_on_port_data_changes = True                                   

# URL for connection to nova (Only supports one nova region currently).
# nova_url = http://127.0.0.1:8774/v2                                  
nova_url = http://172.30.21.160:8774/v2                                

# Name of nova region to use. Useful if keystone manages more than one region
# nova_region_name =                                                         
nova_region_name =RegionOne                                                  

# Username for connection to nova in admin context
# nova_admin_username =                           
nova_admin_username =nova                         

# The uuid of the admin nova tenant
# nova_admin_tenant_id =           
nova_admin_tenant_id =3670c4e9d05f4c309d6da9b3ab50f4de

# Password for connection to nova in admin context.
# nova_admin_password =                            
nova_admin_password =11c64138eb784f10              

# Authorization URL for connection to nova in admin context.
# nova_admin_auth_url =                                     
nova_admin_auth_url =http://172.30.21.160:35357/v2.0        

# Number of seconds between sending events to nova if there are any events to send
# send_events_interval = 2                                                        
send_events_interval = 2                                                          

# ======== end of neutron nova interactions ==========
kombu_reconnect_delay=1.0                             
rabbit_use_ssl=False                                  
router_distributed=False                              

[quotas]
# Default driver to use for quota checks
# quota_driver = neutron.db.quota_db.DbQuotaDriver

# Resource name(s) that are supported in quota features
# quota_items = network,subnet,port                    

# Default number of resource allowed per tenant. A negative value means
# unlimited.                                                           
# default_quota = -1                                                   

# Number of networks allowed per tenant. A negative value means unlimited.
# quota_network = 10                                                      

# Number of subnets allowed per tenant. A negative value means unlimited.
# quota_subnet = 10                                                      

# Number of ports allowed per tenant. A negative value means unlimited.
# quota_port = 50                                                      

# Number of security groups allowed per tenant. A negative value means
# unlimited.                                                          
# quota_security_group = 10                                           

# Number of security group rules allowed per tenant. A negative value means
# unlimited.                                                               
# quota_security_group_rule = 100                                          

# Number of vips allowed per tenant. A negative value means unlimited.
# quota_vip = 10                                                      

# Number of pools allowed per tenant. A negative value means unlimited.
# quota_pool = 10                                                      

# Number of pool members allowed per tenant. A negative value means unlimited.
# The default is unlimited because a member is not a real resource consumer   
# on Openstack. However, on back-end, a member is a resource consumer         
# and that is the reason why quota is possible.                               
# quota_member = -1                                                           

# Number of health monitors allowed per tenant. A negative value means
# unlimited.                                                          
# The default is unlimited because a health monitor is not a real resource
# consumer on Openstack. However, on back-end, a member is a resource consumer
# and that is the reason why quota is possible.                               
# quota_health_monitors = -1                                                  

# Number of routers allowed per tenant. A negative value means unlimited.
# quota_router = 10                                                      

# Number of floating IPs allowed per tenant. A negative value means unlimited.
# quota_floatingip = 50                                                       

[agent]
# Use "sudo neutron-rootwrap /etc/neutron/rootwrap.conf" to use the real
# root filter facility.                                                 
# Change to "sudo" to skip the filtering and just run the comand directly
# root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf         
root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf           

# =========== items for agent management extension =============
# seconds between nodes reporting state to server; should be less than
# agent_down_time, best if it is half or less than agent_down_time    
# report_interval = 30                                                
report_interval = 30                                                  

# ===========  end of items for agent management extension =====

[keystone_authtoken]
# auth_host = 127.0.0.1
auth_host = 172.30.21.160
# auth_port = 35357      
auth_port = 35357        
# auth_protocol = http   
auth_protocol = http     
# admin_tenant_name = %SERVICE_TENANT_NAME%
admin_tenant_name = services               
# admin_user = %SERVICE_USER%              
admin_user = neutron                       
# admin_password = %SERVICE_PASSWORD%      
admin_password = 607ed64a0dd843b4          
auth_uri=http://172.30.21.160:5000/        

[database]
# This line MUST be changed to actually run the plugin.
# Example:                                             
# connection = mysql://root:pass@127.0.0.1:3306/neutron
connection = mysql://neutron:99bab38803c54341@172.30.21.160/ovs_neutron
# Replace 127.0.0.1 above with the IP address of the database used by the
# main neutron server. (Leave it as is if the database runs on this host.)
# connection = sqlite://

# The SQLAlchemy connection string used to connect to the slave database
# slave_connection =

# Database reconnection retry times - in event connectivity is lost
# set to -1 implies an infinite retry count
# max_retries = 10
max_retries = 10

# Database reconnection interval in seconds - if the initial connection to the
# database fails
# retry_interval = 10
retry_interval = 10

# Minimum number of SQL connections to keep open in a pool
# min_pool_size = 1
min_pool_size = 1

# Maximum number of SQL connections to keep open in a pool
# max_pool_size = 10
max_pool_size = 10

# Timeout in seconds before idle sql connections are reaped
# idle_timeout = 3600
idle_timeout = 3600

# If set, use this value for max_overflow with sqlalchemy
# max_overflow = 20
max_overflow = 20

# Verbosity of SQL debugging information. 0=None, 100=Everything
# connection_debug = 0

# Add python stack traces to SQL as comment strings
# connection_trace = False

# If set, use this value for pool_timeout with sqlalchemy
# pool_timeout = 10

[service_providers]
# Specify service providers (drivers) for advanced services like loadbalancer, VPN, Firewall.
# Must be in form:
# service_provider=<service_type>:<name>:<driver>[:default]
# List of allowed service types includes LOADBALANCER, FIREWALL, VPN
# Combination of <service type> and <name> must be unique; <driver> must also be unique
# This is multiline option, example for default provider:
# service_provider=LOADBALANCER:name:lbaas_plugin_driver_path:default
# example of non-default provider:
# service_provider=FIREWALL:name2:firewall_driver_path
# --- Reference implementations ---
service_provider=LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
service_provider=VPN:openswan:neutron.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default
# In order to activate Radware's lbaas driver you need to uncomment the next line.
# If you want to keep the HA Proxy as the default lbaas driver, remove the attribute default from the line below.
# Otherwise comment the HA Proxy line
# service_provider = LOADBALANCER:Radware:neutron.services.loadbalancer.drivers.radware.driver.LoadBalancerDriver:default
# uncomment the following line to make the 'netscaler' LBaaS provider available.
# service_provider=LOADBALANCER:NetScaler:neutron.services.loadbalancer.drivers.netscaler.netscaler_driver.NetScalerPluginDriver
# Uncomment the following line (and comment out the OpenSwan VPN line) to enable Cisco's VPN driver.
# service_provider=VPN:cisco:neutron.services.vpn.service_drivers.cisco_ipsec.CiscoCsrIPsecVPNDriver:default
# Uncomment the line below to use Embrane heleos as Load Balancer service provider.
# service_provider=LOADBALANCER:Embrane:neutron.services.loadbalancer.drivers.embrane.driver.EmbraneLbaas:default
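
The [database] section above points at the controller's MySQL. To rule out a bad connection string, it can be exercised by hand from the network node with the same credentials (a sketch; assumes the mysql client is installed):

[root@nfv-openstack-neutron ~]# mysql -h 172.30.21.160 -u neutron -p99bab38803c54341 ovs_neutron -e 'SHOW TABLES;'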

Packstack Installation Process

[root@nfv-openstack-controller ~]# packstack --answer-file=answers.txt                                                                                                            
Welcome to Installer setup utility                                                                                                                                                
Packstack changed given value  to required value /root/.ssh/id_rsa.pub                                                                                                            

Installing:
Clean Up                                             [ DONE ]
Setting up ssh keys                                  [ DONE ]
Discovering hosts' details                           [ DONE ]
Adding pre install manifest entries                  [ DONE ]
Installing time synchronization via NTP              [ DONE ]
Adding MySQL manifest entries                        [ DONE ]
Adding AMQP manifest entries                         [ DONE ]
Adding Keystone manifest entries                     [ DONE ]
Adding Glance Keystone manifest entries              [ DONE ]
Adding Glance manifest entries                       [ DONE ]
Installing dependencies for Cinder                   [ DONE ]
Adding Cinder Keystone manifest entries              [ DONE ]
Adding Cinder manifest entries                       [ DONE ]
Checking if the Cinder server has a cinder-volumes vg[ DONE ]
Adding Nova API manifest entries                     [ DONE ]
Adding Nova Keystone manifest entries                [ DONE ]
Adding Nova Cert manifest entries                    [ DONE ]
Adding Nova Conductor manifest entries               [ DONE ]
Creating ssh keys for Nova migration                 [ DONE ]
Gathering ssh host keys for Nova migration           [ DONE ]
Adding Nova Compute manifest entries                 [ DONE ]
Adding Nova Scheduler manifest entries               [ DONE ]
Adding Nova VNC Proxy manifest entries               [ DONE ]
Adding Nova Common manifest entries                  [ DONE ]
Adding Openstack Network-related Nova manifest entries[ DONE ]
Adding Neutron API manifest entries                  [ DONE ] 
Adding Neutron Keystone manifest entries             [ DONE ] 
Adding Neutron L3 manifest entries                   [ DONE ] 
Adding Neutron L2 Agent manifest entries             [ DONE ] 
Adding Neutron DHCP Agent manifest entries           [ DONE ] 
Adding Neutron LBaaS Agent manifest entries          [ DONE ] 
Adding Neutron Metadata Agent manifest entries       [ DONE ] 
Adding OpenStack Client manifest entries             [ DONE ] 
Adding Horizon manifest entries                      [ DONE ] 
Adding Swift Keystone manifest entries               [ DONE ]
Adding Swift builder manifest entries                [ DONE ]
Adding Swift proxy manifest entries                  [ DONE ]
Adding Swift storage manifest entries                [ DONE ]
Adding Swift common manifest entries                 [ DONE ]
Adding MongoDB manifest entries                      [ DONE ]
Adding Ceilometer manifest entries                   [ DONE ]
Adding Ceilometer Keystone manifest entries          [ DONE ]
Adding post install manifest entries                 [ DONE ]
Preparing servers                                    [ DONE ]
Installing Dependencies                              [ DONE ]
Copying Puppet modules and manifests                 [ DONE ]
Applying 172.30.21.161_prescript.pp
Applying 172.30.21.160_prescript.pp
Applying 172.30.21.72_prescript.pp
Applying 172.30.21.73_prescript.pp
172.30.21.161_prescript.pp:                          [ DONE ]
172.30.21.160_prescript.pp:                          [ DONE ]
172.30.21.72_prescript.pp:                           [ DONE ]
172.30.21.73_prescript.pp:                           [ DONE ]
Applying 172.30.21.161_ntpd.pp
Applying 172.30.21.160_ntpd.pp
Applying 172.30.21.72_ntpd.pp
Applying 172.30.21.73_ntpd.pp
172.30.21.161_ntpd.pp:                               [ DONE ]
172.30.21.160_ntpd.pp:                               [ DONE ]
172.30.21.73_ntpd.pp:                                [ DONE ]
172.30.21.72_ntpd.pp:                                [ DONE ]
Applying 172.30.21.160_mysql.pp
Applying 172.30.21.160_amqp.pp
172.30.21.160_mysql.pp:                              [ DONE ]
172.30.21.160_amqp.pp:                               [ DONE ]
Applying 172.30.21.160_keystone.pp
Applying 172.30.21.160_glance.pp
Applying 172.30.21.160_cinder.pp
172.30.21.160_keystone.pp:                           [ DONE ]
172.30.21.160_glance.pp:                             [ DONE ]
172.30.21.160_cinder.pp:                             [ DONE ]
Applying 172.30.21.160_api_nova.pp
172.30.21.160_api_nova.pp:                           [ DONE ]
Applying 172.30.21.160_nova.pp
Applying 172.30.21.73_nova.pp
Applying 172.30.21.72_nova.pp
172.30.21.160_nova.pp:                               [ DONE ]
172.30.21.72_nova.pp:                                [ DONE ]
172.30.21.73_nova.pp:                                [ DONE ]
Applying 172.30.21.161_neutron.pp
Applying 172.30.21.73_neutron.pp
Applying 172.30.21.72_neutron.pp
172.30.21.73_neutron.pp:                             [ DONE ]
172.30.21.161_neutron.pp:                         [ ERROR ]
Applying Puppet manifests                         [ ERROR ]

ERROR : Error appeared during Puppet run: 172.30.21.161_neutron.pp
Error: Could not start Service[neutron-server]: Execution of '/sbin/service neutron-server start' returned 1: Starting neutron: [FAILED]
You will find full trace in log /var/tmp/packstack/20150316-185014-jLHeJy/manifests/172.30.21.161_neutron.pp.log
Please check log file /var/tmp/packstack/20150316-185014-jLHeJy/openstack-setup.log for more information
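
Since the failure is just "service neutron-server start returned 1", the quickest way to see the real error is to start the service by hand on the network node and read its log (log_dir in neutron.conf above is /var/log/neutron, so the server log should be there):

[root@nfv-openstack-neutron ~]# service neutron-server start
[root@nfv-openstack-neutron ~]# tail -n 50 /var/log/neutron/server.log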

Additional information:
 * Did not create a cinder volume group, one already existed
 * File /root/keystonerc_admin has been created on OpenStack client host . To use the command line tools you need to source the file.
 * To access the OpenStack Dashboard browse to http://172.30.21.160/dashboard .
Please, find your login credentials stored in the keystonerc_admin in your home directory.
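
As the note above says, the command line tools only work after sourcing the credentials file; for example, on the controller (keystone client from this release assumed):

[root@nfv-openstack-controller ~]# source /root/keystonerc_admin
[root@nfv-openstack-controller ~]# keystone service-list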

Packstack Errors

File 1 - /var/tmp/packstack/20150316-185014-jLHeJy/openstack-setup.log

[root@nfv-openstack-controller html]# cat /var/tmp/packstack/20150316-185014-jLHeJy/openstack-setup.log
2015-03-16 18:50:15::INFO::shell::78::root:: [localhost] Executing script:                             
rm -rf /var/tmp/packstack/20150316-185014-jLHeJy/manifests/*pp                                         
2015-03-16 18:50:15::INFO::shell::78::root:: [172.30.21.161] Executing script:                         
mkdir -p ~/.ssh                                                                                        
chmod 500 ~/.ssh                                                                                       
grep 'ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4UeJnWgOgRQ3HkWx5npYpa+yX8hV2q45m8MrcUwCLN+LqJB6u0uD1Pw+6sOO1oROk9xoP0Z1VtncZdvPqFjVc7SZfn17o+oNpqq/Y+nsaIvutHzmdOkMwum6JTvB2kSSckxroeuO99WH4V2B7ssIDceczlm3ve0ORbLcQzoM+1SB7ytRV6obLzTvuYoHA+DGF+th3D34pApWzMfwiRijs41ON2XcKK4vzDX2AVB5nVM22hjliIZSYsaZFtVOk5jgQRM7NlOOh3BgG+wl6qLefEaLiW2JULZ9GowcUNl8dbHOLOxfuKH9u+VYDB388MU07y1jSzb7gBw0ybBYWQt1vw== root@nfv-openstack-controller.com' ~/.ssh/authorized_keys > /dev/null 2>&1 || echo ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4UeJnWgOgRQ3HkWx5npYpa+yX8hV2q45m8MrcUwCLN+LqJB6u0uD1Pw+6sOO1oROk9xoP0Z1VtncZdvPqFjVc7SZfn17o+oNpqq/Y+nsaIvutHzmdOkMwum6JTvB2kSSckxroeuO99WH4V2B7ssIDceczlm3ve0ORbLcQzoM+1SB7ytRV6obLzTvuYoHA+DGF+th3D34pApWzMfwiRijs41ON2XcKK4vzDX2AVB5nVM22hjliIZSYsaZFtVOk5jgQRM7NlOOh3BgG+wl6qLefEaLiW2JULZ9GowcUNl8dbHOLOxfuKH9u+VYDB388MU07y1jSzb7gBw0ybBYWQt1vw== root@nfv-openstack-controller.com >> ~/.ssh/authorized_keys            
chmod 400 ~/.ssh/authorized_keys                                                                                                                                                                                                            
restorecon -r ~/.ssh                                                                                                                                                                                                                        
2015-03-16 18:50:15::INFO::shell::78::root:: [172.30.21.160] Executing script:                                                                                                                                                              
mkdir -p ~/.ssh                                                                                                                                                                                                                             
chmod 500 ~/.ssh                                                                                                                                                                                                                            
grep 'ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4UeJnWgOgRQ3HkWx5npYpa+yX8hV2q45m8MrcUwCLN+LqJB6u0uD1Pw+6sOO1oROk9xoP0Z1VtncZdvPqFjVc7SZfn17o+oNpqq/Y+nsaIvutHzmdOkMwum6JTvB2kSSckxroeuO99WH4V2B7ssIDceczlm3ve0ORbLcQzoM+1SB7ytRV6obLzTvuYoHA+DGF+th3D34pApWzMfwiRijs41ON2XcKK4vzDX2AVB5nVM22hjliIZSYsaZFtVOk5jgQRM7NlOOh3BgG+wl6qLefEaLiW2JULZ9GowcUNl8dbHOLOxfuKH9u+VYDB388MU07y1jSzb7gBw0ybBYWQt1vw== root@nfv-openstack-controller.com' ~/.ssh/authorized_keys > /dev/null 2>&1 || echo ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4UeJnWgOgRQ3HkWx5npYpa+yX8hV2q45m8MrcUwCLN+LqJB6u0uD1Pw+6sOO1oROk9xoP0Z1VtncZdvPqFjVc7SZfn17o+oNpqq/Y+nsaIvutHzmdOkMwum6JTvB2kSSckxroeuO99WH4V2B7ssIDceczlm3ve0ORbLcQzoM+1SB7ytRV6obLzTvuYoHA+DGF+th3D34pApWzMfwiRijs41ON2XcKK4vzDX2AVB5nVM22hjliIZSYsaZFtVOk5jgQRM7NlOOh3BgG+wl6qLefEaLiW2JULZ9GowcUNl8dbHOLOxfuKH9u+VYDB388MU07y1jSzb7gBw0ybBYWQt1vw== root@nfv-openstack-controller.com >> ~/.ssh/authorized_keys            
chmod 400 ~/.ssh/authorized_keys                                                                                                                                                                                                            
restorecon -r ~/.ssh                                                                                                                                                                                                                        
2015-03-16 18:50:16::INFO::shell::78::root:: [172.30.21.72] Executing script:                                                                                                                                                               
mkdir -p ~/.ssh                                                                                                                                                                                                                             
chmod 500 ~/.ssh                                                                                                                                                                                                                            
grep 'ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4UeJnWgOgRQ3HkWx5npYpa+yX8hV2q45m8MrcUwCLN+LqJB6u0uD1Pw+6sOO1oROk9xoP0Z1VtncZdvPqFjVc7SZfn17o+oNpqq/Y+nsaIvutHzmdOkMwum6JTvB2kSSckxroeuO99WH4V2B7ssIDceczlm3ve0ORbLcQzoM+1SB7ytRV6obLzTvuYoHA+DGF+th3D34pApWzMfwiRijs41ON2XcKK4vzDX2AVB5nVM22hjliIZSYsaZFtVOk5jgQRM7NlOOh3BgG+wl6qLefEaLiW2JULZ9GowcUNl8dbHOLOxfuKH9u+VYDB388MU07y1jSzb7gBw0ybBYWQt1vw== root@nfv-openstack-controller.com' ~/.ssh/authorized_keys > /dev/null 2>&1 || echo ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4UeJnWgOgRQ3HkWx5npYpa+yX8hV2q45m8MrcUwCLN+LqJB6u0uD1Pw+6sOO1oROk9xoP0Z1VtncZdvPqFjVc7SZfn17o+oNpqq/Y+nsaIvutHzmdOkMwum6JTvB2kSSckxroeuO99WH4V2B7ssIDceczlm3ve0ORbLcQzoM+1SB7ytRV6obLzTvuYoHA+DGF+th3D34pApWzMfwiRijs41ON2XcKK4vzDX2AVB5nVM22hjliIZSYsaZFtVOk5jgQRM7NlOOh3BgG+wl6qLefEaLiW2JULZ9GowcUNl8dbHOLOxfuKH9u+VYDB388MU07y1jSzb7gBw0ybBYWQt1vw== root@nfv-openstack-controller.com >> ~/.ssh/authorized_keys            
chmod 400 ~/.ssh/authorized_keys                                                                                                                                                                                                            
restorecon -r ~/.ssh                                                                                                                                                                                                                        
2015-03-16 18:50:16::INFO::shell::78::root:: [172.30.21.73] Executing script:                                                                                                                                                               
mkdir -p ~/.ssh                                                                                                                                                                                                                             
chmod 500 ~/.ssh                                                                                                                                                                                                                            
grep 'ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4UeJnWgOgRQ3HkWx5npYpa+yX8hV2q45m8MrcUwCLN+LqJB6u0uD1Pw+6sOO1oROk9xoP0Z1VtncZdvPqFjVc7SZfn17o+oNpqq/Y+nsaIvutHzmdOkMwum6JTvB2kSSckxroeuO99WH4V2B7ssIDceczlm3ve0ORbLcQzoM+1SB7ytRV6obLzTvuYoHA+DGF+th3D34pApWzMfwiRijs41ON2XcKK4vzDX2AVB5nVM22hjliIZSYsaZFtVOk5jgQRM7NlOOh3BgG+wl6qLefEaLiW2JULZ9GowcUNl8dbHOLOxfuKH9u+VYDB388MU07y1jSzb7gBw0ybBYWQt1vw== root@nfv-openstack-controller.com' ~/.ssh/authorized_keys > /dev/null 2>&1 || echo ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4UeJnWgOgRQ3HkWx5npYpa+yX8hV2q45m8MrcUwCLN+LqJB6u0uD1Pw+6sOO1oROk9xoP0Z1VtncZdvPqFjVc7SZfn17o+oNpqq/Y+nsaIvutHzmdOkMwum6JTvB2kSSckxroeuO99WH4V2B7ssIDceczlm3ve0ORbLcQzoM+1SB7ytRV6obLzTvuYoHA+DGF+th3D34pApWzMfwiRijs41ON2XcKK4vzDX2AVB5nVM22hjliIZSYsaZFtVOk5jgQRM7NlOOh3BgG+wl6qLefEaLiW2JULZ9GowcUNl8dbHOLOxfuKH9u+VYDB388MU07y1jSzb7gBw0ybBYWQt1vw== root@nfv-openstack-controller.com >> ~/.ssh/authorized_keys            
chmod 400 ~/.ssh/authorized_keys                                                                                                                                                                                                            
restorecon -r ~/.ssh                                                                                                                                                                                                                        
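
The script Packstack pushes to each host here is an idempotent key install: grep exits 0 if the controller's public key is already in ~/.ssh/authorized_keys, and only when the key is absent does the echo after || append it. The same idiom in generic form (the key string below is a placeholder, not a real key):

grep 'ssh-rsa AAAA...placeholder... root@controller' ~/.ssh/authorized_keys > /dev/null 2>&1 \
  || echo 'ssh-rsa AAAA...placeholder... root@controller' >> ~/.ssh/authorized_keys
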
2015-03-16 18:50:16::INFO::shell::78::root:: [172.30.21.161] Executing script:                                                                                                                                                              
cat /etc/redhat-release                                                                                                                                                                                                                     
2015-03-16 18:50:17::INFO::shell::78::root:: [172.30.21.161] Executing script:                                                                                                                                                              
mkdir -p /var/tmp/packstack                                                                                                                                                                                                                 
mkdir --mode 0700 /var/tmp/packstack/4b6b68788bb0471e8ec3e32fad48c2d4                                                                                                                                                                       
mkdir --mode 0700 /var/tmp/packstack/4b6b68788bb0471e8ec3e32fad48c2d4/modules                                                                                                                                                               
mkdir --mode 0700 /var/tmp/packstack/4b6b68788bb0471e8ec3e32fad48c2d4/resources                                                                                                                                                             
2015-03-16 18:50:17::INFO::shell::78::root:: [172.30.21.160] Executing script:                                                                                                                                                              
cat /etc/redhat-release                                                                                                                                                                                                                     
2015-03-16 18:50:18::INFO::shell::78::root:: [172.30.21.160] Executing script:                                                                                                                                                              
mkdir -p /var/tmp/packstack                                                                                                                                                                                                                 
mkdir --mode 0700 /var/tmp/packstack/cb029fe17c1949cf860ec68f285fd446                                                                                                                                                                       
mkdir --mode 0700 /var/tmp/packstack/cb029fe17c1949cf860ec68f285fd446/modules                                                                                                                                                               
mkdir --mode 0700 /var/tmp/packstack/cb029fe17c1949cf860ec68f285fd446/resources                                                                                                                                                             
2015-03-16 18:50:18::INFO::shell::78::root:: [172.30.21.72] Executing script:                                                                                                                                                               
cat /etc/redhat-release                                                                                                                                                                                                                     
2015-03-16 18:50:18::INFO::shell::78::root:: [172.30.21.72] Executing script:                                                                                                                                                               
mkdir -p /var/tmp/packstack                                                                                                                                                                                                                 
mkdir --mode 0700 /var/tmp/packstack/a39242a07ad54ae4b09cd23f428242dd                                                                                                                                                                       
mkdir --mode 0700 /var/tmp/packstack/a39242a07ad54ae4b09cd23f428242dd/modules                                                                                                                                                               
mkdir --mode 0700 /var/tmp/packstack/a39242a07ad54ae4b09cd23f428242dd/resources                                                                                                                                                             
2015-03-16 18:50:18::INFO::shell::78::root:: [172.30.21.73] Executing script:                                                                                                                                                               
cat /etc/redhat-release                                                                                                                                                                                                                     
2015-03-16 18:50:19::INFO::shell::78::root:: [172.30.21.73] Executing script:                                                                                                                                                               
mkdir -p /var/tmp/packstack                                                                                                                                                                                                                 
mkdir --mode 0700 /var/tmp/packstack/68803234da0b48a5ada31a091f93a1a8                                                                                                                                                                       
mkdir --mode 0700 /var/tmp/packstack/68803234da0b48a5ada31a091f93a1a8/modules                                                                                                                                                               
mkdir --mode 0700 /var/tmp/packstack/68803234da0b48a5ada31a091f93a1a8/resources                                                                                                                                                             
2015-03-16 18:50:19::INFO::shell::78::root:: [172.30.21.160] Executing script:                                                                                                                                                              
rpm -q --whatprovides lvm2 || yum install -y lvm2                                                                                                                                                                                           
2015-03-16 18:50:19::INFO::shell::78::root:: [172.30.21.160] Executing script:                                                                                                                                                              
vgdisplay cinder-volumes                                                                                                                                                                                                                    
2015-03-16 18:50:20::INFO::shell::78::root:: [172.30.21.160] Executing script:                                                                                                                                                              
sed -i -r "s/^ *snapshot_autoextend_threshold +=.*/    snapshot_autoextend_threshold = 80/" /etc/lvm/lvm.conf                                                                                                                               
sed -i -r "s/^ *snapshot_autoextend_percent +=.*/    snapshot_autoextend_percent = 20/" /etc/lvm/lvm.conf                                                                                                                                   
2015-03-16 18:50:21::INFO::shell::78::root:: [localhost] Executing script:                                                                                                                                                                  
ssh-keygen -t rsa -b 2048 -f "/var/tmp/packstack/20150316-185014-jLHeJy/nova_migration_key" -N ""                                                                                                                                           
2015-03-16 18:50:21::INFO::shell::78::root:: [localhost] Executing script:                                                                                                                                                                  
ssh-keyscan 172.30.21.73                                                                                                                                                                                                                    
2015-03-16 18:50:21::INFO::shell::78::root:: [localhost] Executing script:                                                                                                                                                                  
ssh-keyscan 172.30.21.72                                                                                                                                                                                                                    
2015-03-16 18:50:21::INFO::shell::78::root:: [localhost] Executing script:                                                                                                                                                                  
echo $HOME                                                                                                                                                                                                                                  
2015-03-16 18:50:21::INFO::shell::78::root:: [172.30.21.72] Executing script:                                                                                                                                                               
rpm -q --whatprovides yum-utils || yum install -y yum-utils                                                                                                                                                                                 
2015-03-16 18:50:21::INFO::shell::78::root:: [172.30.21.72] Executing script:                                                                                                                                                               
yum-config-manager --disable epel                                                                                                                                                                                                           
2015-03-16 18:50:22::WARNING::serverprep_949::124::root:: Failed to set EPEL repo on host 172.30.21.72:                                                                                                                                     
This is OK in case you don't want EPEL installed and enabled.                                                                                                                                                                               
2015-03-16 18:50:22::INFO::shell::34::root:: Executing command:                                                                                                                                                                             
rpm -q rdo-release --qf='%{version}-%{release}.%{arch}                                                                                                                                                                                      
'                                                                                                                                                                                                                                           
2015-03-16 18:50:22::INFO::shell::78::root:: [172.30.21.72] Executing script:                                                                                                                                                               
yum clean metadata                                                                                                                                                                                                                          
2015-03-16 18:50:22::INFO::shell::78::root:: [172.30.21.160] Executing script:                                                                                                                                                              
rpm -q --whatprovides yum-utils || yum install -y yum-utils                                                                                                                                                                                 
2015-03-16 18:50:23::INFO::shell::78::root:: [172.30.21.160] Executing script:                                                                                                                                                              
yum-config-manager --disable epel                                                                                                                                                                                                           
2015-03-16 18:50:23::WARNING::serverprep_949::124::root:: Failed to set EPEL repo on host 172.30.21.160:                                                                                                                                    
This is OK in case you don't want EPEL installed and enabled.                                                                                                                                                                               
2015-03-16 18:50:23::INFO::shell::34::root:: Executing command:                                                                                                                                                                             
rpm -q rdo-release --qf='%{version}-%{release}.%{arch}                                                                                                                                                                                      
'                                                                                                                                                                                                                                           
2015-03-16 18:50:23::INFO::shell::78::root:: [172.30.21.160] Executing script:                                                                                                                                                              
yum clean metadata                                                                                                                                                                                                                          
2015-03-16 18:50:24::INFO::shell::78::root:: [172.30.21.161] Executing script:                                                                                                                                                              
rpm -q --whatprovides yum-utils || yum install -y yum-utils                                                                                                                                                                                 
2015-03-16 18:50:25::INFO::shell::78::root:: [172.30.21.161] Executing script:                                                                                                                                                              
yum-config-manager --disable epel                                                                                                                                                                                                           
2015-03-16 18:50:25::WARNING::serverprep_949::124::root:: Failed to set EPEL repo on host 172.30.21.161:                                                                                                                                    
This is OK in case you don't want EPEL installed and enabled.                                                                                                                                                                               
2015-03-16 18:50:25::INFO::shell::34::root:: Executing command:                                                                                                                                                                             
rpm -q rdo-release --qf='%{version}-%{release}.%{arch}                                                                                                                                                                                      
'                                                                                                                                                                                                                                           
2015-03-16 18:50:26::INFO::shell::78::root:: [172.30.21.161] Executing script:                                                                                                                                                              
yum clean metadata                                                                                                                                                                                                                          
2015-03-16 18:50:26::INFO::shell::78::root:: [172.30.21.73] Executing script:                                                                                                                                                               
rpm -q --whatprovides yum-utils || yum install -y yum-utils                                                                                                                                                                                 
2015-03-16 18:50:26::INFO::shell::78::root:: [172.30.21.73] Executing script:                                                                                                                                                               
yum-config-manager --disable epel                                                                                                                                                                                                           
2015-03-16 18:50:27::WARNING::serverprep_949::124::root:: Failed to set EPEL repo on host 172.30.21.73:                                                                                                                                     
This is OK in case you don't want EPEL installed and enabled.                                                                                                                                                                               
2015-03-16 18:50:27::INFO::shell::34::root:: Executing command:                                                                                                                                                                             
rpm -q rdo-release --qf='%{version}-%{release}.%{arch}                                                                                                                                                                                      
'                                                                                                                                                                                                                                           
2015-03-16 18:50:27::INFO::shell::78::root:: [172.30.21.73] Executing script:                                                                                                                                                               
yum clean metadata                                                                                                                                                                                                                          
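
Each host gets the same repository preparation: install yum-utils if missing, attempt to disable EPEL (the warning above is expected when EPEL was never installed), query the rdo-release version, and clean the yum metadata cache. To double-check what a node ended up with, standard rpm/yum commands suffice:

rpm -q rdo-release
yum repolist enabled | grep -iE 'epel|openstack|rdo'
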
2015-03-16 18:50:27::INFO::shell::78::root:: [localhost] Executing script:                                                                                                                                                                  
rpm -q --requires openstack-puppet-modules | egrep -v "^(rpmlib|\/|perl)"                                                                                                                                                                   
2015-03-16 18:50:27::INFO::shell::78::root:: [172.30.21.72] Executing script:                                                                                                                                                               
rpm -q --whatprovides puppet || yum install -y puppet                                                                                                                                                                                       
rpm -q --whatprovides openssh-clients || yum install -y openssh-clients                                                                                                                                                                     
rpm -q --whatprovides tar || yum install -y tar                                                                                                                                                                                             
rpm -q --whatprovides nc || yum install -y nc                                                                                                                                                                                               
rpm -q --whatprovides rubygem-json || yum install -y rubygem-json                                                                                                                                                                           
2015-03-16 18:50:28::INFO::shell::78::root:: [172.30.21.160] Executing script:                                                                                                                                                              
rpm -q --whatprovides puppet || yum install -y puppet                                                                                                                                                                                       
rpm -q --whatprovides openssh-clients || yum install -y openssh-clients                                                                                                                                                                     
rpm -q --whatprovides tar || yum install -y tar                                                                                                                                                                                             
rpm -q --whatprovides nc || yum install -y nc                                                                                                                                                                                               
rpm -q --whatprovides rubygem-json || yum install -y rubygem-json
2015-03-16 18:50:28::INFO::shell::78::root:: [172.30.21.161] Executing script:
rpm -q --whatprovides puppet || yum install -y puppet
rpm -q --whatprovides openssh-clients || yum install -y openssh-clients
rpm -q --whatprovides tar || yum install -y tar
rpm -q --whatprovides nc || yum install -y nc
rpm -q --whatprovides rubygem-json || yum install -y rubygem-json
2015-03-16 18:50:29::INFO::shell::78::root:: [172.30.21.73] Executing script:
rpm -q --whatprovides puppet || yum install -y puppet
rpm -q --whatprovides openssh-clients || yum install -y openssh-clients
rpm -q --whatprovides tar || yum install -y tar
rpm -q --whatprovides nc || yum install -y nc
rpm -q --whatprovides rubygem-json || yum install -y rubygem-json
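
The rpm -q --whatprovides X || yum install -y X lines reuse the install-only-if-missing idiom from the SSH key step: the rpm query succeeds when some installed package already provides the capability, so yum only runs on a fresh host. A quick sanity loop over the same prerequisites (my own check, not part of Packstack):

for p in puppet openssh-clients tar nc rubygem-json; do
    rpm -q --whatprovides "$p" > /dev/null || echo "missing: $p"
done
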
2015-03-16 18:50:29::INFO::shell::78::root:: [localhost] Executing script:
cd /usr/lib/python2.6/site-packages/packstack/puppet
cd /var/tmp/packstack/20150316-185014-jLHeJy/manifests
tar --dereference -cpzf - ../manifests | ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null root@172.30.21.72 tar -C /var/tmp/packstack/a39242a07ad54ae4b09cd23f428242dd -xpzf -
cd /usr/share/openstack-puppet/modules
tar --dereference -cpzf - apache ceilometer certmonger cinder concat firewall glance heat horizon inifile keystone memcached mongodb mysql neutron nova nssdb openstack openstacklib packstack qpid rabbitmq rsync ssh stdlib swift sysctl tempest vcsrepo vlan vswitch xinetd | ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null root@172.30.21.72 tar -C /var/tmp/packstack/a39242a07ad54ae4b09cd23f428242dd/modules -xpzf -
cd /usr/lib/python2.6/site-packages/packstack/puppet
cd /var/tmp/packstack/20150316-185014-jLHeJy/manifests
tar --dereference -cpzf - ../manifests | ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null root@172.30.21.160 tar -C /var/tmp/packstack/cb029fe17c1949cf860ec68f285fd446 -xpzf -
cd /usr/share/openstack-puppet/modules
tar --dereference -cpzf - apache ceilometer certmonger cinder concat firewall glance heat horizon inifile keystone memcached mongodb mysql neutron nova nssdb openstack openstacklib packstack qpid rabbitmq rsync ssh stdlib swift sysctl tempest vcsrepo vlan vswitch xinetd | ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null root@172.30.21.160 tar -C /var/tmp/packstack/cb029fe17c1949cf860ec68f285fd446/modules -xpzf -
cd /usr/lib/python2.6/site-packages/packstack/puppet
cd /var/tmp/packstack/20150316-185014-jLHeJy/manifests
tar --dereference -cpzf - ../manifests | ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null root@172.30.21.161 tar -C /var/tmp/packstack/4b6b68788bb0471e8ec3e32fad48c2d4 -xpzf -
cd /usr/share/openstack-puppet/modules
tar --dereference -cpzf - apache ceilometer certmonger cinder concat firewall glance heat horizon inifile keystone memcached mongodb mysql neutron nova nssdb openstack openstacklib packstack qpid rabbitmq rsync ssh stdlib swift sysctl tempest vcsrepo vlan vswitch xinetd | ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null root@172.30.21.161 tar -C /var/tmp/packstack/4b6b68788bb0471e8ec3e32fad48c2d4/modules -xpzf -
cd /usr/lib/python2.6/site-packages/packstack/puppet
cd /var/tmp/packstack/20150316-185014-jLHeJy/manifests
tar --dereference -cpzf - ../manifests | ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null root@172.30.21.73 tar -C /var/tmp/packstack/68803234da0b48a5ada31a091f93a1a8 -xpzf -
cd /usr/share/openstack-puppet/modules
tar --dereference -cpzf - apache ceilometer certmonger cinder concat firewall glance heat horizon inifile keystone memcached mongodb mysql neutron nova nssdb openstack openstacklib packstack qpid rabbitmq rsync ssh stdlib swift sysctl tempest vcsrepo vlan vswitch xinetd | ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null root@172.30.21.73 tar -C /var/tmp/packstack/68803234da0b48a5ada31a091f93a1a8/modules -xpzf -
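
Packstack stages its manifests and Puppet modules by piping a tar stream over ssh instead of using scp: the local tar writes the archive to stdout, ssh carries it, and the remote tar unpacks it straight into the per-run /var/tmp/packstack directory. The generic form of the pattern (HOST and paths are placeholders):

tar --dereference -cpzf - ./modules \
    | ssh root@HOST tar -C /var/tmp/staging -xpzf -
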
2015-03-16 18:54:04::ERROR::run_setup::892::root:: Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", line 887, in main
    _main(confFile)
  File "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", line 574, in _main
    runSequences()
  File "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", line 553, in runSequences
    controller.runAllSequences()
  File "/usr/lib/python2.6/site-packages/packstack/installer/setup_controller.py", line 84, in runAllSequences
    sequence.run(self.CONF)
  File "/usr/lib/python2.6/site-packages/packstack/installer/core/sequences.py", line 96, in run
    step.run(config=config)
  File "/usr/lib/python2.6/site-packages/packstack/installer/core/sequences.py", line 43, in run
    raise SequenceError(str(ex))
SequenceError: Error appeared during Puppet run: 172.30.21.161_neutron.pp
Error: Could not start Service[neutron-server]: Execution of '/sbin/service neutron-server start' returned 1: Starting neutron: [FAILED]
You will find full trace in log /var/tmp/packstack/20150316-185014-jLHeJy/manifests/172.30.21.161_neutron.pp.log

2015-03-16 18:54:04::INFO::shell::78::root:: [172.30.21.160] Executing script:
rm -rf /var/tmp/packstack/cb029fe17c1949cf860ec68f285fd446
2015-03-16 18:54:05::INFO::shell::78::root:: [172.30.21.72] Executing script:
rm -rf /var/tmp/packstack/a39242a07ad54ae4b09cd23f428242dd
2015-03-16 18:54:05::INFO::shell::78::root:: [172.30.21.161] Executing script:
rm -rf /var/tmp/packstack/4b6b68788bb0471e8ec3e32fad48c2d4
2015-03-16 18:54:06::INFO::shell::78::root:: [172.30.21.73] Executing script:
rm -rf /var/tmp/packstack/68803234da0b48a5ada31a091f93a1a8

File 2 - /var/tmp/packstack/20150316-185014-jLHeJy/manifests/172.30.21.161_neutron.pp.log (the log referenced in the error above)

[root@nfv-openstack-controller html]# cat /var/tmp/packstack/20150316-185014-jLHeJy/manifests/172.30.21.161_neutron.pp.log
Warning: Config file /etc/puppet/hiera.yaml not found, using Hiera defaults
Warning: Scope(Class[Neutron::Server]): The sql_connection parameter is deprecated, use database_connection instead.
Warning: Scope(Class[Neutron::Agents::L3]): The neutron::l3_agent::network_device_mtu parameter is deprecated, use neutron::network_device_mtu instead.
Notice: Compiled catalog for nfv-openstack-neutron.com in environment production in 3.46 seconds
Warning: The package type's allow_virtual parameter will be changing its default value from false to true in a future release. If you do not want to allow virtual packages, please explicitly set allow_virtual to false.
   (at /usr/lib/ruby/site_ruby/1.8/puppet/type.rb:816:in `set_default')
Notice: /Stage[main]/Main/Packstack::Firewall[neutron_server_172.30.21.161_]/Firewall[001 neutron server incoming neutron_server_172.30.21.161_]/source: source changed '10.138.254.100/32' to '10.138.254.200/32'
Notice: Firewall[001 neutron server incoming neutron_server_172.30.21.161_](provider=iptables): Properties changed - updating rule
Notice: /Stage[main]/Neutron::Server::Notifications/Nova_admin_tenant_id_setter[nova_admin_tenant_id]/ensure: created
Notice: /Stage[main]/Main/Exec[neutron-db-manage upgrade]/returns: executed successfully
Notice: /Stage[main]/Neutron::Agents::Ovs/Service[neutron-plugin-ovs-service]/ensure: ensure changed 'stopped' to 'running'
Notice: /Stage[main]/Neutron::Agents::Dhcp/Service[neutron-dhcp-service]/ensure: ensure changed 'stopped' to 'running'
Notice: /Stage[main]/Neutron::Agents::L3/Service[neutron-l3]/ensure: ensure changed 'stopped' to 'running'
Error: Could not start Service[neutron-server]: Execution of '/sbin/service neutron-server start' returned 1: Starting neutron: [FAILED]
Wrapped exception:
Execution of '/sbin/service neutron-server start' returned 1: Starting neutron: [FAILED]
Error: /Stage[main]/Neutron::Server/Service[neutron-server]/ensure: change from stopped to running failed: Could not start Service[neutron-server]: Execution of '/sbin/service neutron-server start' returned 1: Starting neutron: [FAILED]
Notice: /Stage[main]/Neutron::Server/Service[neutron-server]: Triggered 'refresh' from 1 events
Notice: Finished catalog run in 8.63 seconds
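
The Puppet log only records that the init script returned 1; the actual reason neutron-server refuses to start should be in Neutron's own log on 172.30.21.161. A minimal next diagnostic step, assuming the default RDO log and config locations:

# on 172.30.21.161
tail -n 100 /var/log/neutron/server.log
# or run the server in the foreground to see the full traceback
neutron-server --config-file /etc/neutron/neutron.conf \
               --config-file /etc/neutron/plugin.ini
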