
Diego Lima's profile - activity

2016-09-05 09:21:54 -0500 received badge  Great Answer (source)
2015-01-08 23:24:44 -0500 received badge  Good Answer (source)
2014-09-04 11:40:51 -0500 received badge  Enthusiast
2014-08-21 13:24:15 -0500 received badge  Famous Question (source)
2014-08-21 12:15:47 -0500 answered a question ERROR nova.virt.libvirt.driver [-] Nova requires libvirt version 0.9.6 or greater

Make sure your compute nodes and API server are running the same version of Nova packages.

2014-08-21 11:23:41 -0500 commented answer Problem running multiple L3 instances

I tried leaving gateway_external_network_id and external_network_bridge empty on L3's configuration and starting a single agent. I have also added IP addresses from both external networks but L3 doesn't seem to add the ports to the correct one (it only adds ports on br-ex-2, not on br-ex)

2014-08-21 06:44:58 -0500 commented answer Problem running multiple L3 instances

Unfortunately I couldn't use the L3 agent without declaring a specific external network. My environment needs to use two different network interfaces, one handling each external network, and each L3 needs to know which interface it needs to bind to.

2014-08-19 19:25:45 -0500 received badge  Notable Question (source)
2014-08-19 12:45:54 -0500 received badge  Popular Question (source)
2014-08-19 11:26:55 -0500 asked a question Problem running multiple L3 instances

I've got a setup where two different external networks are presented to OpenStack instances. Since the external networks are on different interfaces and have different purposes, I've set up two different L3 agents, each configured to handle a specific external network, following these directions:

With both agents running I can create routers (the namespaces do show correctly, including ip address assignment), but whenever I add a port to an internal subnet I get the following error on neutron-server.log:

2014-08-19 11:25:16.362 11934 WARNING neutron.plugins.ml2.managers [req-28d1ad31-8e08-4069-8804-98291964fb35 None] Failed to bind port 222d22d0-e4c6-4af4-a9e1-94be80d18213 on host sonic-agent2

2014-08-19 11:25:16.366 11934 WARNING neutron.plugins.ml2.plugin [req-28d1ad31-8e08-4069-8804-98291964fb35 None] In _notify_port_updated(), no bound segment for port 222d22d0-e4c6-4af4-a9e1-94be80d18213 on network cab446c7-d29f-462f-8a9d-e48604efc02c

2014-08-19 11:25:19.341 11934 WARNING neutron.plugins.ml2.rpc [req-8e138bb7-74fa-49d1-93a1-db1c72533a8a None] Device 222d22d0-e4c6-4af4-a9e1-94be80d18213 requested by agent ovsc81f66b95137 on network cab446c7-d29f-462f-8a9d-e48604efc02c not bound, vif_type: binding_failed

The result is that instances inside the networks cannot reach the router (nor can I ping instances from inside the router namespace). The router's public address, however, works just fine.

The configuration files for the agents are:

  • agent 1
  • agent 2
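The linked files are no longer available; as an illustrative sketch only (the network IDs are placeholders, not my actual values), two Havana-era l3_agent.ini files pinning each agent to one external network would look roughly like this:

```ini
# Agent 1 (illustrative)
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
external_network_bridge = br-ex
gateway_external_network_id = EXT_NET_1_ID
handle_internal_only_routers = True

# Agent 2 (illustrative)
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
external_network_bridge = br-ex-2
gateway_external_network_id = EXT_NET_2_ID
handle_internal_only_routers = False
```

Only one of the two agents should keep handle_internal_only_routers enabled, so internal-only routers are not scheduled twice.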

My neutron.conf looks like this:

L2 agent plugin settings (ml2 using GRE):
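The original attachment is gone; for illustration only (not my actual file, and local_ip is a placeholder for each node's tunnel endpoint), a Havana ml2 configuration with GRE tenant networks typically contains:

```ini
[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch

[ml2_type_gre]
tunnel_id_ranges = 1:1000

[ovs]
local_ip = TUNNEL_ENDPOINT_IP
enable_tunneling = True

[agent]
tunnel_types = gre
```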

UPDATE: I have given up on the "host" parameter on the L3 agents. Without it I no longer get port-binding errors, but the agents occasionally stop responding until they are restarted.

2014-07-02 09:12:22 -0500 received badge  Nice Answer (source)
2014-06-25 17:41:28 -0500 received badge  Self-Learner (source)
2014-06-13 07:29:07 -0500 received badge  Famous Question (source)
2014-06-05 20:38:51 -0500 commented answer Internal Server error during "nova image-list" request

Hi! I believe you need to upgrade your glance packages to the latest version as well. I took the opportunity to upgrade all services on my management node (including glance-api and glance-registry).

I did not upgrade any packages on any other servers (neither compute nor dedicated network nodes).

2014-06-04 15:09:37 -0500 received badge  Notable Question (source)
2014-06-01 14:31:36 -0500 received badge  Popular Question (source)
2014-05-30 12:46:28 -0500 received badge  Scholar (source)
2014-05-30 12:46:21 -0500 answered a question Internal Server error during "nova image-list" request

Answering myself so that anyone who runs into the same problem won't be at a loss:

I've managed to fix the problem by upgrading the nova packages to the latest version (1:2013.2.3-0ubuntu1~cloud0) from the repository.

2014-05-30 12:26:56 -0500 commented question Internal Server error during "nova image-list" request

Update: I've compared this environment with another site that I'm running, where everything is working fine. I've noticed that the request sent to nova-api on the functioning environment doesn't include the "marker" parameter.

2014-05-30 10:52:53 -0500 asked a question Internal Server error during "nova image-list" request

Hi! I've got an OpenStack environment (Havana on Ubuntu 12.04) that had been working for some time, but now I can't list images using "nova image-list". Using "glance image-list" or "glance index" works just fine.

Nova, however, returns an internal server error message:

# nova image-list
ERROR: The server has either erred or is incapable of performing the requested operation. (HTTP 500) (Request-ID: req-b3bca21a-0f98-4796-bde2-c7ed555aed68)

I've managed to track the error to Glance Registry, which seems to be rolling back a transaction after receiving an error from the database (further details regarding the query on the pastebin file at the end of the post):

2014-05-30 12:37:32.880 18921 INFO sqlalchemy.engine.base.Engine [-] ROLLBACK

On the database logs (I'm using PostgreSQL 9.3) I get the following error message:

2014-05-30 12:41:47 BRT ERROR:  invalid input syntax for type timestamp: "" at character 4184
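That error is consistent with an empty string reaching a timestamp column: PostgreSQL accepts NULL there but rejects ''. A minimal Python analogue of that strictness (names are illustrative, this is not Glance's code):

```python
from datetime import datetime

def parse_ts(value):
    # PostgreSQL-style strictness: NULL (None) is fine, '' is not a timestamp.
    if value is None:
        return None
    if value == "":
        raise ValueError('invalid input syntax for type timestamp: ""')
    return datetime.strptime(value, "%Y-%m-%d %H:%M:%S.%f")
```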

I can also get details about the image that seems to be causing the error using "glance image-show 45db5061-9f61-460a-b78d-386fd70e9b5c" (Details on pastebin) and on the database:

glance=# select * from images where id = '45db5061-9f61-460a-b78d-386fd70e9b5c';
45db5061-9f61-460a-b78d-386fd70e9b5c | 117_AmbariSlave | 3270443008 | active | f         | 2014-05-13 13:37:54.803997 | 2014-05-13 13:43:24.105433 |            | f       | qcow2       | bare             | 0b00c947551e073af5510c9f3dc8ab3f | cb581997b681459f88d9f8b01dc0ebcc |       40 |    4096 | f

Comparing this site with another one where everything is working fine, I've noticed that the request sent to the Nova API on the working one does not include the marker parameter:

On the site where "nova image-list" doesn't work it includes a marker parameter on the request:

While this is clearly the trigger (manually repeating the request without the marker works), I have no idea where this marker parameter is coming from.
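For context, the marker parameter is the standard OpenStack list-API pagination cursor: the client sends the id of the last item of the previous page and the server returns the items after it. A minimal sketch of the mechanism (illustrative code, not Nova's actual implementation):

```python
def list_page(items, marker=None, limit=3):
    """Return the page of items that starts just after the item whose id == marker."""
    start = 0
    if marker is not None:
        ids = [item["id"] for item in items]
        if marker not in ids:
            # A marker the backend cannot resolve is what turns into an API error.
            raise ValueError("marker %r not found" % marker)
        start = ids.index(marker) + 1
    return items[start:start + limit]
```

A marker the backend cannot resolve, or one that produces a malformed database query, surfaces as a server-side error rather than an empty page, which matches the HTTP 500 seen here.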

Log Files:

Postgresql Error Log:

Glance Registry Log:

Nova API Log:

Image details:

2014-02-11 18:32:10 -0500 received badge  Good Question (source)
2014-02-10 07:20:56 -0500 received badge  Critic
2014-02-10 07:14:12 -0500 answered a question Assigning Floating IP fails.

It looks like you created a network but didn't create a router to connect it to your external network. In order to assign a floating IP to an instance you need the following:

  1. Properly configured external network
  2. Properly configured internal network
  3. Router that has a gateway set on your external network and a port in your internal network
  4. Instance launched in your internal network (and whose gateway is the above mentioned router)

In your case the external network is probably "a03ebfb2-f897-4181-bf14-109f507eecac" and your internal network is "881be3de-bd37-4672-aa9f-1ee18c7f0fd4".

The whole process of creating an internal network and adding the router is like this:

$ neutron net-create samplenetwork
$ neutron subnet-create samplenetwork CIDR
$ neutron router-create samplerouter
$ neutron router-gateway-set samplerouter externalnet
$ neutron router-interface-add samplerouter subnetID

In the example above, externalnet is your external network, and you'd replace subnetID with the ID of the subnet you just created.

After doing that you should launch an instance into that subnet and then associate the floating IP.

2014-02-05 03:58:00 -0500 received badge  Enlightened (source)
2014-02-05 03:57:53 -0500 received badge  Good Answer (source)
2014-02-04 07:54:51 -0500 received badge  Nice Answer (source)
2014-02-04 07:54:12 -0500 received badge  Teacher (source)
2014-02-04 07:32:29 -0500 answered a question How do I create a floating IP pool

You can create an external network (and add a subnet with the IP pool) using these two commands (assuming you're using neutron due to your post tags):

# neutron net-create ext-net -- --router:external=True SPECIAL_OPTIONS
# neutron subnet-create ext-net --allocation-pool start=FLOATING_IP_START,end=FLOATING_IP_END --gateway=EXTERNAL_INTERFACE_GATEWAY --enable_dhcp=False EXTERNAL_INTERFACE_CIDR

If you're using nova-network you can add the pool like this:

# nova-manage floating create --pool=nova --ip_range=

You can read more about this on these 2 links:

2014-01-31 10:50:21 -0500 received badge  Nice Question (source)
2013-12-27 13:20:34 -0500 received badge  Student (source)
2013-12-23 18:06:24 -0500 received badge  Famous Question (source)
2013-12-23 16:22:00 -0500 commented question No network communication on a 3 node Havana setup

I've managed to make some progress: I connected a second interface to each node and added them to br-int. Now I can see packets arrive on network node, although I still get no response from DHCP. I'll rebuild network settings and post an update to all relevant files soon.

2013-12-23 16:05:21 -0500 commented question No network communication on a 3 node Havana setup

I've manually removed br-ex from the network node (ovs-vsctl del-br br-ex). There are no more references to br-ex on any nodes. I've also added your suggested auth settings to all neutron.conf files. Results are still the same: I see DHCP requests on the compute node but not on network node.

2013-12-23 15:26:34 -0500 commented question No network communication on a 3 node Havana setup

I've made the suggested changes but still get no networking. Upon launching my vm I can see some traffic (dhcp requests) on the compute node's br-int interface, but no traffic on the network node. I'll edit my main post and place the tcpdump results here. I've also updated pastebin's conf files.

2013-12-23 15:21:51 -0500 received badge  Notable Question (source)
2013-12-23 14:51:05 -0500 received badge  Popular Question (source)
2013-12-23 14:12:01 -0500 commented question No network communication on a 3 node Havana setup

I've added the nova.conf files to the description (both on the compute and management nodes)

2013-12-23 14:11:10 -0500 edited question No network communication on a 3 node Havana setup

I've been struggling with neutron for the past few days and I can't get instances to receive addresses using DHCP or reach anything outside the same physical host. My setup consists of three nodes, all running Ubuntu Server 12.04 with Havana installed from the Ubuntu Cloud Archive:

  • Network node: Connected to the WAN (eth0) and LAN (eth1) networks, running neutron dhcp-agent, l3-agent, metadata-agent and plugin-openvswitch-agent.
  • Management node: Connected only to the LAN network (eth0), running neutron (server, plugin-openvswitch-agent), postgresql, rabbitmq, keystone, glance, nova (api, conductor, cert, vnc proxy and scheduler) and horizon.
  • Compute node: Connected only to the LAN network (eth0), running nova-compute and neutron-plugin-openvswitch-agent.

Everything aside from networking works as expected so far, and two instances running on the same compute node are able to reach each other if I log on using the console and assign an address manually.

On the neutron server log (management node) I can see the following message whenever I launch an instance:

2013-12-23 12:52:13.151 862 WARNING neutron.db.agentschedulers_db [-] Fail scheduling network {'status': u'ACTIVE', 'subnets': [u'be9659ca-1ee7-4c35-b36c-d082a581495f'], 'name': u'Alpha', 'provider:physical_network': None, 'admin_state_up': True, 'tenant_id': u'04968d54151d4bb29d477f754e099728', 'provider:network_type': u'local', 'router:external': False, 'shared': False, 'id': u'031dcf57-c259-41f2-be2a-1541f88d3238', 'provider:segmentation_id': None}
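Note the provider:network_type u'local' in that log line: a "local" network never crosses hosts, which usually points at the plugin's tenant network type not being set. For illustration only (values are placeholders, not my actual config), a Havana ovs_neutron_plugin.ini set up for GRE tunneling would contain:

```ini
[OVS]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
local_ip = TUNNEL_ENDPOINT_IP
integration_bridge = br-int
tunnel_bridge = br-tun
```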

General info:

Configuration files and logs follow:

Network node:

Compute node:

Management Node:

Tcpdump output when launching a new instance: