Instance gets more than one fixed IP (grizzly-g3)

I have installed grizzly-g3, but quantum does not work well. When I boot 128 instances, one of them gets more than one fixed IP; when I boot 64 instances, it never happens. Besides that, sometimes I cannot ping a VM via its floating IP. I did not find any error message in my quantum logs (all the files in /var/log/quantum). Below are the relevant output and my configurations.

```
| 97a93600-38e2-4700-9851-15ef56c1d628 | slave | ACTIVE | demo-int-net=  |
| 99aeb6b8-4252-4839-a7d1-f87853116100 | slave | ACTIVE | demo-int-net=  |
| 9aa82a35-c9f1-4f44-a108-d14e74eec231 | slave | ACTIVE | demo-int-net=, |
| 9b6b1289-c450-4614-b647-e5ebdffff80a | slave | ACTIVE | demo-int-net=  |
| 9e0d3aa5-0f15-4b24-944a-6d6c3e18ce64 | slave | ACTIVE | demo-int-net=  |
| 9ea62124-9128-43cc-acdd-142f1e7743d6 | slave | ACTIVE | demo-int-net=  |
```

(Note the extra comma on instance 9aa82a35: that is the one that got two fixed IPs.)
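For reference, this is the rough check I use to spot instances that picked up more than one fixed IP in the `nova list` output. It is only a sketch; the instance IDs and IP addresses in the example are made up, since the real ones are not shown above:

```python
# Rough sketch: scan `nova list` table output and flag instances whose
# networks column lists more than one fixed IP (comma-separated).
def find_multi_ip_instances(nova_list_output):
    flagged = []
    for line in nova_list_output.splitlines():
        if not line.startswith("|"):
            continue  # skip table borders and anything that is not a row
        cols = [c.strip() for c in line.strip().strip("|").split("|")]
        if len(cols) < 4 or cols[0] in ("ID", ""):
            continue  # skip the header row
        instance_id, name, status, networks = cols[:4]
        # "demo-int-net=10.0.0.5, 10.0.0.9" means two fixed IPs
        ips = [p.strip() for p in networks.split("=", 1)[-1].split(",") if p.strip()]
        if len(ips) > 1:
            flagged.append((instance_id, ips))
    return flagged
```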

My setup: one DB host (db service), one glance host (glance service), one API host (keystone, nova-api, nova-scheduler, nova-conductor, quantum-server, quantum-dhcp-agent, quantum-l3-agent, quantum-plugin-openvswitch-agent), and eight compute hosts (each with nova-compute and quantum-plugin-openvswitch-agent). I checked that all services on all hosts are running.

I used the vlan network type and the openvswitch plugin.

My quantum.conf:


```
# Default log level is INFO
# verbose and debug has the same result.
# One of them will set DEBUG log level output
debug = True

# Address to bind the API server
# bind_host =

# Port the bind the API server to
bind_port = 9696

# Quantum plugin provider module
# core_plugin =
core_plugin = quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2

# Advanced service modules
# service_plugins =

# Paste configuration file
api_paste_config = /etc/quantum/api-paste.ini

# The strategy to be used for auth.
# Supported values are 'keystone'(default), 'noauth'.
auth_strategy = keystone

# Modules of exceptions that are permitted to be recreated
# upon receiving exception data from an rpc call.
allowed_rpc_exception_modules = quantum.openstack.common.exception, nova.exception

# AMQP exchange to connect to if using RabbitMQ or QPID
control_exchange = quantum

# RPC driver. DHCP agents needs it.
notification_driver = quantum.openstack.common.notifier.rpc_notifier

# default_notification_level is used to form actual topic name(s) or to set logging level
default_notification_level = INFO

# Defined in rpc_notifier, can be comma separated values.
# The actual topic names will be %s.%(default_notification_level)s
notification_topics = notifications

# resource name(s) that are supported in quota features
quota_items = network,subnet,port

# default number of resource allowed per tenant, minus for unlimited
default_quota = -1

# number of networks allowed per tenant, and minus means unlimited
quota_network = 10

# number of subnets allowed per tenant, and minus means unlimited
quota_subnet = 10

# number of ports allowed per tenant, and minus means unlimited
quota_port = 5000
quota_floatingip = 5000

# default driver to use for quota checks
quota_driver = quantum.quota.ConfDriver

# =========== items for agent management extension =============
# Seconds to regard the agent as down.
agent_down_time = 5
# =========== end of items for agent management extension =====

# Description of the default service type (optional)
description = "default service type"
# Enter a service definition line for each advanced service provided
# by the default service type.
# Each service definition should be in the following format:

# If set to true this allows quantum to receive proxied security group calls from nova
proxy_mode = False

[AGENT]
root_helper = sudo quantum-rootwrap /etc/quantum/rootwrap.conf

# =========== items for agent management extension =============
# seconds between nodes reporting state to server, should be less than
# agent_down_time
report_interval = 4
# =========== end of items for agent management extension =====

[keystone_authtoken]
auth_host = host-keystone
auth_port = 35357
auth_protocol = http
admin_tenant_name = demoTenant
admin_user = test
admin_password = 123456
signing_dir = /var/lib/quantum/keystone-signing
```
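One thing I am unsure about in the config above is the agent timing: report_interval = 4 against agent_down_time = 5 leaves almost no slack, so a single delayed state report could mark an agent as down. A quick sanity check (the factor of 3 is just my own rule of thumb, not anything official):

```python
# Sanity check for the heartbeat settings in my quantum.conf.
# The factor of 3 is my own rule of thumb, not an official formula:
# the server should tolerate a couple of missed reports before it
# declares the agent down.
def agent_timing_ok(report_interval, agent_down_time, factor=3):
    return agent_down_time >= factor * report_interval

print(agent_timing_ok(4, 5))    # values from my config above -> False
print(agent_timing_ok(4, 15))   # a safer margin -> True
```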

My dhcp_agent.ini:


```
# Where to store dnsmasq state files. This directory must be writable by the
# user executing the agent.
state_path = /var/lib/quantum

# OVS based plugins(OVS, Ryu, NEC, NVP, BigSwitch/Floodlight)
interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver

# The agent can use other DHCP drivers. Dnsmasq is the simplest and requires
# no additional setup of the DHCP server.
dhcp_driver = quantum.agent.linux.dhcp.Dnsmasq
```

My ovs_quantum_plugin.ini:


```
# This line MUST be changed to actually run the plugin.
sql_connection = mysql://quantum:quantum@host-db/quantum

# Database reconnection interval in seconds - if the initial connection to the
# database fails
reconnect_interval = 2

# (StrOpt) Type of network to allocate for tenant networks. The
# default value 'local' is useful only for single-box testing and
# provides no connectivity between hosts. You MUST either change this
# to 'vlan' and configure network_vlan_ranges below or change this to
# 'gre' and configure tunnel_id_ranges below in order for tenant
# networks to provide connectivity between hosts. Set to 'none' to
# disable creation of tenant networks.

network_vlan_ranges = DemoNet:1:4094
bridge_mappings = DemoNet:DemoBridge

# Agent's polling interval in seconds
polling_interval = 2
```


Also, when I execute "quantum router-gateway-set" to set the external network as the router gateway, I found that the status of the router's port on the external network is DOWN. Does that matter, and if it does, how can I fix it?
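To keep track of this, I filter the port list for gateway ports stuck in DOWN. A sketch over port records as dicts (the shape the API returns; the device_owner value network:router_gateway and the status field are standard quantum fields, but the IDs below are made up):

```python
# Sketch: given port records (e.g. parsed from `quantum port-list`
# or fetched via the API), pick out router gateway ports that are
# stuck in DOWN.
def down_gateway_ports(ports):
    return [p["id"] for p in ports
            if p.get("device_owner") == "network:router_gateway"
            and p.get("status") == "DOWN"]

# made-up example records
ports = [
    {"id": "gw-1", "device_owner": "network:router_gateway", "status": "DOWN"},
    {"id": "vm-1", "device_owner": "compute:nova", "status": "ACTIVE"},
]
print(down_gateway_ports(ports))   # -> ['gw-1']
```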

This has blocked me for several days. Can someone help me solve it? Any help will be appreciated.