Ask Your Question
2

Instance gets more than one fixed IP (grizzly-g3)

asked 2013-03-26 01:09:38 -0600

xinxin-shu gravatar image

I have installed Grizzly-g3, but Quantum does not work well. When I boot 128 instances, one of the instances gets more than one fixed IP; however, when I boot 64 instances, it never happens. Besides that, sometimes I cannot ping a VM via its floating IP. I did not find any error message in my Quantum logs (all the files in /var/log/quantum). Below are the erroneous output and my configuration.

| 97a93600-38e2-4700-9851-15ef56c1d628 | slave | ACTIVE | demo-int-net=172.16.100.4 |
| 99aeb6b8-4252-4839-a7d1-f87853116100 | slave | ACTIVE | demo-int-net=172.16.100.117 |
| 9aa82a35-c9f1-4f44-a108-d14e74eec231 | slave | ACTIVE | demo-int-net=172.16.100.108, 172.16.100.109 |
| 9b6b1289-c450-4614-b647-e5ebdffff80a | slave | ACTIVE | demo-int-net=172.16.100.5 |
| 9e0d3aa5-0f15-4b24-944a-6d6c3e18ce64 | slave | ACTIVE | demo-int-net=172.16.100.35 |
| 9ea62124-9128-43cc-acdd-142f1e7743d6 | slave | ACTIVE | demo-int-net=172.16.100.132 |

My setup: one DB host (DB service), one Glance host (Glance service), one API host (keystone, nova-api, nova-scheduler, nova-conductor, quantum-server, quantum-dhcp-agent, quantum-l3-agent, quantum-plugin-openvswitch-agent), and eight compute hosts (each with nova-compute and quantum-plugin-openvswitch-agent). I checked that all the services on all hosts work well.

I used VLAN-type networks and the Open vSwitch plugin.

My quantum.conf:

[DEFAULT]
# Default log level is INFO
# verbose and debug has the same result.
# One of them will set DEBUG log level output
debug = True

# Address to bind the API server
bind_host = 0.0.0.0

# Port the bind the API server to
bind_port = 9696

# Quantum plugin provider module
# core_plugin =
core_plugin = quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2

# Advanced service modules
# service_plugins =

# Paste configuration file
api_paste_config = /etc/quantum/api-paste.ini

# The strategy to be used for auth.
# Supported values are 'keystone' (default), 'noauth'.
auth_strategy = keystone

# Modules of exceptions that are permitted to be recreated
# upon receiving exception data from an rpc call.
allowed_rpc_exception_modules = quantum.openstack.common.exception, nova.exception

# AMQP exchange to connect to if using RabbitMQ or QPID
control_exchange = quantum

# RPC driver. DHCP agents need it.
notification_driver = quantum.openstack.common.notifier.rpc_notifier

# default_notification_level is used to form actual topic name(s) or to set logging level
default_notification_level = INFO

# Defined in rpc_notifier; can be comma-separated values.
# The actual topic names will be %s.%(default_notification_level)s
notification_topics = notifications

[QUOTAS]
# resource name(s) that are supported in quota features
quota_items = network,subnet,port

# default number of resources allowed per tenant; a negative value means unlimited
default_quota = -1

# number of networks allowed per tenant; a negative value means unlimited
quota_network = 10

# number of subnets allowed per tenant; a negative value means unlimited
quota_subnet = 10

# number of ports allowed per tenant; a negative value means unlimited
quota_port = 5000
quota_floatingip = 5000

# default driver to use for quota checks
quota_driver = quantum.quota.ConfDriver

# =========== items for agent management extension =============
# Seconds to regard the agent as down.
agent_down_time = 5
# =========== end of items for agent management extension =====

[DEFAULT_SERVICETYPE]
# Description of the default service type (optional)
description = "default service type"
# Enter a service definition line for each advanced service provided
# by the default service type.
# Each service definition should be in the following format:
# <service>:<plugin>[:driver]

[SECURITYGROUP]
# If set to true this allows quantum to receive proxied security group calls from nova
proxy_mode = False

[AGENT]
root_helper = sudo quantum-rootwrap /etc/quantum ... (more)


31 answers

1

answered 2013-07-03 17:28:02 -0600

Hi Dan,

I filed this blueprint a while ago but haven't gotten a chance to start implementing it: https://blueprints.launchpad.net/nova... . I think this will help solve a lot of the orchestration issues we have with how nova and quantum work together. What I'm planning for the blueprint is basically to force clients to create a port in quantum first, then pass the port-id to nova when booting an instance (and, to keep the API backwards compatible, allocate the port on the nova-api server if a network is passed in). This should make things simpler and help with this, imo. Here is the discussion we had on the list a while ago about this: http://lists.openstack.org/pipermail/...
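The port-first flow described above can be simulated in a few lines. This is an illustrative sketch only; FakeQuantum and FakeNova are stand-ins, not the real OpenStack clients, and the real blueprint API may differ:

```python
# Simulation of "create the port first, then pass the port-id to nova".
# Because nova only attaches a pre-created port and never allocates one,
# a retried boot after a timeout cannot leave a second port behind.
import uuid


class FakeQuantum:
    def __init__(self):
        self.ports = {}  # port_id -> port dict

    def create_port(self, network_id):
        port_id = str(uuid.uuid4())
        self.ports[port_id] = {"network_id": network_id, "device_id": None}
        return port_id


class FakeNova:
    def __init__(self, quantum):
        self.quantum = quantum

    def boot(self, name, port_id):
        # Attach the existing port to the instance; no port creation here.
        self.quantum.ports[port_id]["device_id"] = name
        return name


quantum = FakeQuantum()
nova = FakeNova(quantum)
port_id = quantum.create_port("demo-int-net")
nova.boot("slave", port_id)
nova.boot("slave", port_id)   # retried boot after a timeout
print(len(quantum.ports))     # 1 -- still exactly one port
```

Contrast this with the current flow, where nova itself asks quantum for a port during boot, so a timed-out and retried request can allocate twice.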

0

answered 2013-06-28 21:13:26 -0600

I know that we're chiming in on this a bit late, but we recently ran into this issue in our (grizzly+, nvp backed) environment as well.

Thanks for the patch ( https://review.openstack.org/#/c/32691/ ). We will experiment with the new configuration settings and see if this helps us avoid doubly-allocated ports.

I'd like to note that the patch above seems to be only a means of working around the underlying issue, which AFAICT is coordination of state between nova and neutron. The failure of a discrete API call due to a timeout or some other issue should always be expected, and duplicate/retried API calls should be handled in an idempotent manner by the resource manager. Maybe I missed some part of the discussion where this was called out as a larger issue that needs to be addressed and the patch above is just an expedient fix, or perhaps I am entirely off-base in my understanding of the issue.
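The idempotency idea can be sketched concretely: if the server deduplicates create requests by a natural key such as (device_id, network_id), a retried call after a timeout returns the existing port instead of allocating a second one. This is an illustration of the principle, not quantum's actual implementation:

```python
# Idempotent port allocation: duplicate/retried create requests for the
# same (device_id, network_id) return the already-allocated port.
import uuid


class PortManager:
    def __init__(self):
        self.ports = {}  # (device_id, network_id) -> port_id

    def create_port(self, device_id, network_id):
        key = (device_id, network_id)
        if key in self.ports:
            # Retried request: hand back the existing port, allocate nothing.
            return self.ports[key]
        port_id = str(uuid.uuid4())
        self.ports[key] = port_id
        return port_id


mgr = PortManager()
first = mgr.create_port("instance-1", "demo-int-net")
retry = mgr.create_port("instance-1", "demo-int-net")  # client timed out, retried
print(first == retry, len(mgr.ports))  # True 1
```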

Are there any ongoing (or even dusty!) discussions around the way state is managed between openstack services?

0

answered 2013-04-12 07:13:46 -0600

Okay, but why is nova requesting that two ports be created just in this case when it should be asking for only one port?

0

answered 2013-04-12 05:37:46 -0600

danwent gravatar image

Sumit, this is because the order is actually the other way around. Think of it this way: nova contacts quantum to create ports, and then, when the nova virt layer actually creates the VM, it looks at how many ports exist for instance X and creates that many interfaces in the XML. Since both ports were created with the same instance id as the port's device_id, the underlying virt layer is tricked into thinking there are supposed to be two interfaces.
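A toy version of the behavior Dan describes (not nova's actual code): the virt layer builds one interface per port whose device_id matches the instance, so a doubly-allocated port simply becomes a second NIC:

```python
# One interface is emitted per port owned by the instance, so a duplicate
# port with the same device_id yields a second interface in the VM config.

def build_interfaces(ports, instance_id):
    return [p["ip"] for p in ports if p["device_id"] == instance_id]


ports = [
    {"device_id": "9aa82a35", "ip": "172.16.100.108"},  # original port
    {"device_id": "9aa82a35", "ip": "172.16.100.109"},  # duplicate from a retried create
    {"device_id": "9b6b1289", "ip": "172.16.100.5"},    # some other instance's port
]
print(build_interfaces(ports, "9aa82a35"))  # ['172.16.100.108', '172.16.100.109']
```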

0

answered 2013-04-12 04:33:38 -0600

Yes, there is an exception due to a timeout, and the earlier VM creation failed. But this is the log from the same node, so I am not sure how scheduling on another HV plays into this.

To me, the basic question is why nova is creating two interfaces for this VM (as you can see in the libvirt conf for this VM). It's not the case that nova creates one interface and ends up with two ports on the Quantum side; nova is actually creating two interfaces and requesting two ports for them.

0

answered 2013-04-12 04:21:37 -0600

Interesting, your nova-compute log shows there was a timeout. Vish was thinking that perhaps the port had already been created, then a timeout occurred, the VM was rescheduled on another HV, and another port was created.

0

answered 2013-04-12 04:08:38 -0600

Nova-compute log: http://paste.openstack.org/show/35783/

Quantum log: http://paste.openstack.org/show/35880/

Instance and port details via CLI: http://paste.openstack.org/show/35881/

0

answered 2013-04-11 15:23:43 -0600

danwent gravatar image

Ok, could you possibly post the portions of the relevant nova-compute + quantum logs, along with a port-show for each of the duplicate ports (or some other way of us seeing the UUIDs of the duplicate ports, and the instance id of the vm)? thanks.

On Thu, Apr 11, 2013 at 12:36 AM, Sumit Naiksatam < question225158@answers.launchpad.net > wrote:

Question #225158 on quantum changed: https://answers.launchpad.net/quantum/+question/225158

Sumit Naiksatam posted a new comment: Here is the nova log as we reproduced this: http://paste.openstack.org/show/35783/

Sorry Dan, I could not add any additional log statements in this run, but I am looking at the setup.

Dan Wendlandt, Nicira, Inc: http://www.nicira.com, twitter: danwendlandt

0

answered 2013-04-11 07:35:37 -0600

Here is the nova log as we reproduced this: http://paste.openstack.org/show/35783/

Sorry Dan, I could not add any additional log statements in this run, but I am looking at the setup.



2 followers

Stats

Asked: 2013-03-26 01:09:38 -0600

Seen: 469 times

Last updated: Jul 03 '13