timoteoverde's profile - activity

2015-07-22 09:07:42 -0500 received badge  Enthusiast
2014-06-18 04:46:30 -0500 received badge  Good Answer (source)
2014-06-10 21:56:07 -0500 received badge  Nice Answer (source)
2014-06-10 19:45:06 -0500 received badge  Teacher (source)
2014-06-10 14:21:18 -0500 answered a question Why In Openstack Neutron with 'GRE' configured, two separate OVS bridges ('br_tun' and 'br_int') are used rather than using a single OVS bridge?

If you take a close look at these two bridges (ovs-vsctl show), you'll see that they are connected to each other using a 'virtual' patch port. The reason why both exist (and are virtually connected) becomes clearer when you dump the OpenFlow table on one of the hypervisors (ovs-ofctl dump-flows br-tun). Depending on the number of VMs you are running on this compute host, there will be corresponding OpenFlow rules to translate VLAN IDs to GRE tunnels. This would look something like:

cookie=0x0, duration=422.357s, table=0, n_packets=82, n_bytes=10443, idle_age=31, priority=4,in_port=1,dl_vlan=1 actions=set_tunnel:0x2,NORMAL
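To make the mapping in that entry concrete, here is a small illustrative sketch (not a Neutron or OVS API, just string parsing of the flow quoted above) that pulls out the local VLAN ID and the GRE tunnel key it is translated to:

```python
import re

# Illustrative only: parse the egress flow entry quoted above to show
# how a local VLAN ID is mapped to a GRE tunnel key by br-tun.
flow = ("cookie=0x0, duration=422.357s, table=0, n_packets=82, n_bytes=10443, "
        "idle_age=31, priority=4,in_port=1,dl_vlan=1 actions=set_tunnel:0x2,NORMAL")

vlan = int(re.search(r"dl_vlan=(\d+)", flow).group(1))          # match on local VLAN tag
tunnel = int(re.search(r"set_tunnel:0x([0-9a-f]+)", flow).group(1), 16)  # GRE key (hex)
print(f"local VLAN {vlan} -> GRE tunnel key {tunnel}")
# prints: local VLAN 1 -> GRE tunnel key 2
```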

For return traffic, you'll see another OpenFlow rule that ensures that the GRE to br-int (local) VLAN mapping can be performed as well. The patch port is again used to ensure the now VLAN-tagged traffic arrives back at the guest. The following OpenFlow entry ensures that ingress traffic with the specified destination MAC will get tagged with VLAN 1 before getting patched to br-int.

 cookie=0x0, duration=421.948s, table=0, n_packets=64, n_bytes=8337, idle_age=31, priority=3,tun_id=0x2,dl_dst=fa:16:3e:dd:c1:62 actions=mod_vlan_vid:1,NORMAL
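The same kind of sketch works for the return direction; this illustrative snippet (again just parsing the flow text above, not an OVS API) shows that the rule keys on the tunnel ID plus the guest's destination MAC and tags the traffic with the local VLAN:

```python
import re

# Illustrative only: parse the return-traffic flow entry quoted above to
# show the GRE tunnel -> local VLAN mapping keyed on the guest's MAC.
flow = ("cookie=0x0, duration=421.948s, table=0, n_packets=64, n_bytes=8337, "
        "idle_age=31, priority=3,tun_id=0x2,dl_dst=fa:16:3e:dd:c1:62 "
        "actions=mod_vlan_vid:1,NORMAL")

tun_id = int(re.search(r"tun_id=0x([0-9a-f]+)", flow).group(1), 16)  # GRE key (hex)
mac = re.search(r"dl_dst=([0-9a-f:]+)", flow).group(1)               # guest MAC
vlan = int(re.search(r"mod_vlan_vid:(\d+)", flow).group(1))          # local VLAN tag
print(f"tunnel {tun_id} + dst {mac} -> tag with VLAN {vlan}")
# prints: tunnel 2 + dst fa:16:3e:dd:c1:62 -> tag with VLAN 2... 
```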

For further information, check out this excellent document (my examples came from here).

And for further details, see the documentation on OVS patch ports.

2014-05-29 07:49:46 -0500 commented question Unable to upload the images into glance - VOVA

You really need to post relevant log information from /var/log/glance on the system where glance-api and glance-registry are running. Can you do this? It also helps to run the glance client with --debug.

2014-05-28 14:53:28 -0500 received badge  Supporter (source)
2014-05-28 14:48:43 -0500 commented question Instance gets dhcp IP but fails to connect to metadata server

Can you post the contents of metadata_agent.ini also?

2014-05-28 10:09:22 -0500 answered a question Neutron Metadata Agent could not find Service or Region

This is a real issue, but whether or not it can be ignored depends on whether you need to fetch metadata into your newly provisioned instances. It will also prevent cloud-init-enabled instances from getting auto-configured.

More than likely, metadata_agent.ini on the Neutron server isn't properly configured. For example, I've seen the exact error message you describe above when auth_region (regionOne in the Ubuntu guide you are using) doesn't match what you already configured with Keystone. Note that this value is case sensitive. Run keystone endpoint-list and look at the region column to see if you've got it right.
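A quick way to catch that case-sensitivity trap is to compare the two strings exactly. The sketch below is hypothetical (the ini contents and the region string are invented for illustration); in practice you would read the real /etc/neutron/metadata_agent.ini and the region from keystone endpoint-list:

```python
import configparser
import io

# Hypothetical example: metadata_agent.ini with auth_region in the wrong case.
ini = io.StringIO("[DEFAULT]\nauth_region = regionone\n")
cfg = configparser.ConfigParser()
cfg.read_file(ini)

keystone_region = "regionOne"  # as reported by `keystone endpoint-list`
configured = cfg.get("DEFAULT", "auth_region")

# The comparison is case sensitive, so 'regionone' != 'regionOne'.
if configured != keystone_region:
    print(f"mismatch: {configured!r} != {keystone_region!r}")
```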