2015-07-22 09:07:42 -0600 | received badge | ● Enthusiast |
2014-06-18 04:46:30 -0600 | received badge | ● Good Answer (source) |
2014-06-10 21:56:07 -0600 | received badge | ● Nice Answer (source) |
2014-06-10 19:45:06 -0600 | received badge | ● Teacher (source) |
2014-06-10 14:21:18 -0600 | answered a question | Why in OpenStack Neutron with GRE configured are two separate OVS bridges ('br-tun' and 'br-int') used rather than a single OVS bridge? If you take a close look at these two bridges (ovs-vsctl show), you'll see that they are connected to each other using a 'virtual' patch port. The reason why both exist (and are virtually connected) becomes clearer when you dump the OpenFlow table on one of the hypervisors (ovs-ofctl dump-flows br-tun). Depending on the number of VMs you are running on this compute host, there will be corresponding OpenFlow rules to translate VLAN IDs to GRE tunnels. For return traffic, you'll see another OpenFlow rule that ensures the GRE to br-int (local) VLAN mapping can be performed as well. The patch port is again used to ensure the now VLAN-tagged traffic arrives back at the guest: an OpenFlow entry matches ingress traffic with the specified destination MAC and tags it with VLAN 1 before it is patched to br-int. For further information, check out this excellent document (my examples came from here): http://openstack.redhat.com/Networking_in_too_much_detail And for further details on OVS patch ports: http://blog.scottlowe.org/2012/11/27/connecting-ovs-bridges-with-patch-ports/ |
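The inline flow examples referenced in this answer ("This would look something like:") were lost. A hedged reconstruction of the kind of output ovs-ofctl dump-flows br-tun produces follows; the tunnel ID 0x1, VLAN 1, port numbers, and MAC address are illustrative placeholders, not values from the original answer:

```shell
# On the compute node, dump the flow table of the tunnel bridge.
ovs-ofctl dump-flows br-tun

# Egress: traffic from br-int carrying local VLAN 1 has its tag
# stripped and is sent out the GRE tunnel with tunnel ID 0x1
# (tunnel ID, VLAN, and ports are illustrative):
#   priority=4,in_port=1,dl_vlan=1 actions=strip_vlan,set_tunnel:0x1,output:2

# Ingress/return: traffic arriving on the GRE tunnel with tunnel ID 0x1
# destined for the guest's MAC is re-tagged with local VLAN 1 and
# forwarded back toward br-int over the patch port:
#   priority=2,tun_id=0x1,dl_dst=fa:16:3e:00:00:01 actions=mod_vlan_vid:1,NORMAL
```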
2014-05-29 07:49:46 -0600 | commented question | Unable to upload the images into glance - VOVA You really need to post relevant log information from /var/log/glance on the system where glance-api and glance-registry are running. Can you do this? It helps to run the glance client with |
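The comment above is truncated mid-sentence; it appears to be recommending the client's verbose output. As an illustration only (the --debug flag is a real python-glanceclient option of that era, but that it is what the comment meant is an assumption):

```shell
# Run the glance client with debug output to capture the full HTTP
# request/response exchange with glance-api (which flag the truncated
# comment intended is an assumption; --debug is one such option):
glance --debug image-list
```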
2014-05-28 14:53:28 -0600 | received badge | ● Supporter (source) |
2014-05-28 14:48:43 -0600 | commented question | Instance gets dhcp IP but fails to connect to metadata server Can you post the contents of metadata_agent.ini also? |
2014-05-28 10:09:22 -0600 | answered a question | Neutron Metadata Agent could not find Service or Region This is a real issue, but whether or not it can be ignored depends on your desire to fetch metadata into your newly provisioned instances. It will also prevent cloud-init-enabled instances (see https://help.ubuntu.com/community/UEC/Images ) from getting auto-configured. More than likely, metadata_agent.ini on the Neutron server isn't properly configured. For example, I've seen the exact error message you describe above when auth_region (regionOne in the Ubuntu guide you are using) doesn't match what you already configured with Keystone. Note that this value is also case sensitive. Run keystone endpoint-list and look at the region column to see if you've got it right. |
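A minimal sketch of the metadata_agent.ini settings this answer is talking about, assuming the Ubuntu install-guide layout; the IP address, passwords, and shared secret are illustrative placeholders, and the shared secret must match Nova's neutron_metadata_proxy_shared_secret:

```
# /etc/neutron/metadata_agent.ini -- illustrative values only
[DEFAULT]
auth_url = http://192.0.2.10:5000/v2.0
# Must match the region column of `keystone endpoint-list`, case-sensitive:
auth_region = regionOne
admin_tenant_name = service
admin_user = neutron
admin_password = NEUTRON_PASS
nova_metadata_ip = 192.0.2.10
metadata_proxy_shared_secret = METADATA_SECRET
```

If auth_region here disagrees with the Keystone endpoint's region (even by letter case), the agent fails with exactly the "could not find Service or Region" error described above.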