
slogan621's profile - activity

2019-09-16 21:44:04 -0500 received badge  Famous Question (source)
2019-09-16 21:44:04 -0500 received badge  Notable Question (source)
2019-05-11 21:10:02 -0500 received badge  Popular Question (source)
2019-04-05 16:00:22 -0500 received badge  Associate Editor (source)
2019-04-05 14:48:34 -0500 asked a question DVR and assignment of FIP gateway IP address

Hi,

I have a routing requirement: each compute node connected to the fabric needs a FIP gateway whose IP address is allocated from a distinct subnet. For example, the FIP gateway address is assigned from 11.1.0.0/26 for compute node 1, from 11.2.0.0/26 for compute node 2, and so on.

I'm aware of the following:

openstack subnet create demo-floating-ip-agent-gateway-subnet \
  --subnet-range 198.51.100.0/24 --no-dhcp \
  --service-type 'network:floatingip_agent_gateway' \
  --network demo-ext-net

For my case, that suggests something like this would get me the subnets:

openstack subnet create demo-floating-ip-agent-gateway-subnet-1 \
  --subnet-range 11.1.0.0/26 --no-dhcp \
  --service-type 'network:floatingip_agent_gateway' \
  --network demo-ext-net

openstack subnet create demo-floating-ip-agent-gateway-subnet-2 \
  --subnet-range 11.2.0.0/26 --no-dhcp \
  --service-type 'network:floatingip_agent_gateway' \
  --network demo-ext-net

But then the question is: assuming the above is legitimate, how do I make the association (compute node 1 allocates its FIP gateway address from demo-floating-ip-agent-gateway-subnet-1, compute node 2 from demo-floating-ip-agent-gateway-subnet-2, and so on)?
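For reference, the way I have been checking which subnet each FIP agent gateway port lands on is the port list filtered by device owner (the value matches the service type above; I'm not sure this is the canonical way) - what I can't find is where the per-compute-node association would be made:

openstack port list --device-owner network:floatingip_agent_gateway --long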

Alternatively, does anyone with experience running DVR in CLOS-based networks have insight into configurations that support it? Or is DVR not a good fit here, and is old-school vRouting (or something else) needed to support FIPs?

2016-11-29 11:47:51 -0500 received badge  Good Answer (source)
2015-08-13 08:49:44 -0500 received badge  Enlightened (source)
2015-08-13 08:49:44 -0500 received badge  Guru (source)
2015-03-28 03:09:53 -0500 received badge  Great Answer (source)
2014-12-03 17:00:27 -0500 received badge  Good Answer (source)
2014-10-21 07:31:37 -0500 received badge  Nice Answer (source)
2014-10-20 15:23:41 -0500 answered a question What's the difference between flat, gre and vlan Neutron network types?

Some additional comments to add to what larsks answered: in a flat network, everyone shares the same network segment. For example, say two tenants share the cluster and this segment is 10.4.128.0/20 - VM1 from tenant 1 might get assigned 10.4.128.3, VM1 from tenant 2 might get 10.4.128.4, and so on. This means that tenant 1 can see the traffic from tenant 2, which is not a good thing in most cases.

In a VLAN network, tenants are separated because each is assigned to a VLAN. With the Open vSwitch plugin (or ML2 with the OVS driver), OVS allocates an internal VLAN for each tenant network in the virtual switches. If you mix in a hardware plugin like the Cisco Nexus plugin, it will be asked to allocate VLANs as well. These VLANs provide separation among the tenants (as VLANs are designed to do). It also means that tenants can specify the same subnet and overlap in that subnet range - VM1 from tenant 1 can get assigned IP 10.4.128.3 and VM1 from tenant 2 can also get 10.4.128.3, without conflict. This makes life easier for administrators, because they don't have to worry about tenants that want the same subnets and address allocations; the VLANs keep them separate.
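As a concrete illustration of the overlap point, something like the following works when each pair of commands is run with the respective tenant's credentials (the network and subnet names are just placeholders):

# as tenant 1
neutron net-create t1-net
neutron subnet-create t1-net 10.4.128.0/20 --name t1-subnet

# as tenant 2 - same CIDR, no conflict
neutron net-create t2-net
neutron subnet-create t2-net 10.4.128.0/20 --name t2-subnet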

GRE segmentation (and VXLAN) also provides separation among tenants, and also allows overlapping subnets and IP ranges. It does this by encapsulating tenant traffic in tunnels. Say your tenant has VMs running on compute nodes A, B, and C. Neutron (along with OVS) will build a fully connected mesh of tunnels between all of these machines, and create a tunnel bridge on each node that is used to direct traffic from VMs into and out of these tunnels. If a VM on machine A wants to send packets to a VM on machine B, machine A will encapsulate the IP packets coming out of the VM using a segmentation ID that is generated for the tenant by OpenStack, and the receiving machine (B) will decapsulate the packets and route them to the destination VM using the addressing information in the ethernet frame.

GRE and VXLAN scale better than VLAN. VLAN-based networking has its applications (you might be integrating with an infrastructure that is VLAN-based to begin with), but I have found GRE/VXLAN-based OVS setups easier to deploy and debug than VLAN-based setups - one reason is that you can use a dumb switch to connect all the physical hosts. So my feeling is that you want to start there if your deployment involves multiple tenants and you want to allow overlapping network segments and IP address ranges across tenants.
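If it helps, the ML2 side of a GRE/VXLAN setup boils down to a handful of lines in ml2_conf.ini. This is a sketch from memory - exact section names, ranges, and file paths can differ by release and by how you deploy:

[ml2]
type_drivers = flat,vlan,gre,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch

[ml2_type_vxlan]
vni_ranges = 1:1000

[ml2_type_gre]
tunnel_id_ranges = 1:1000

Beyond that, the OVS agent on each node needs its local_ip set so the tunnel mesh can be built between the hosts.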

2014-09-08 16:48:07 -0500 commented answer devstack, multi-tenant, and metadata service

I should add that the error message related to the inability to reach 169.254.169.254 was "no route". The route table in one of the VMs now shows the problem to be fixed:

ci-info: |   2   | 169.254.169.254 | 10.0.0.4 | 255.255.255.255 |    eth0   |  U
2014-09-08 16:40:19 -0500 answered a question devstack, multi-tenant, and metadata service

Ok, well, I seem to have an answer after a lot of brute force experimentation.

The way I solved this was by adjusting the localrc files on both the controller and the compute node, and by changing how I create tenants/users and launch VMs. The key was to stop running all command-line apps authenticated as "admin" and instead run some of them authenticated as the user (and associated tenant) appropriate for the command being run - launching VMs, for example.

Here is my new controller local.conf. The key changes were to disable the q-l3 service and to remove MULTI_HOST=True (I was really confused by that setting - it doesn't have anything to do with running on more than one cluster node, but with where services are available, and I want all services to run only on the controller).

[[local|localrc]]
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-meta
enable_service neutron
disable_service q-l3

HOST_IP=192.168.3.3
FLAT_INTERFACE=eth4
FIXED_RANGE=10.4.128.0/24
NETWORK_GATEWAY=10.4.128.1
#FLOATING_RANGE=192.168.3.128/25
LOGFILE=/opt/stack/logs/stack.sh.log
ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=xyzpdqlazydog
EXTRA_OPTS=(metadata_host=$HOST_IP)

Q_ML2_TENANT_NETWORK_TYPE=vxlan
SCHEDULER=nova.scheduler.filter_scheduler.FilterScheduler
IMAGE_URLS="http://cloud-images.ubuntu.com/quantal/current/quantal-server-cloudimg-amd64-disk1.img"
Q_PLUGIN=ml2
ACTIVE_TIMEOUT=120
ASSOCIATE_TIMEOUT=60
BOOT_TIMEOUT=120
SERVICE_TIMEOUT=120
OFFLINE=True
ENABLE_TENANT_TUNNELS=True
TENANT_TUNNEL_RANGES=1:1000
ENABLE_TENANT_VLANS=False

Q_DHCP_EXTRA_DEFAULT_OPTS=(enable_metadata_network=True enable_isolated_metadata=True)

Note that I incorporated rosella-o's suggestion by passing args via Q_DHCP_EXTRA_DEFAULT_OPTS - and, in addition to his suggestion, I included enable_isolated_metadata, because the comments in the .conf file said I needed that enabled as well. I'm going to +1 his suggestion since (as best I can tell) it is a contributing factor, but I really had other dragons to slay before I could see things work.
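As far as I can tell, devstack turns those Q_DHCP_EXTRA_DEFAULT_OPTS entries into [DEFAULT] settings in the DHCP agent config, i.e. roughly this (the path is what it is on my controller; yours may differ):

# /etc/neutron/dhcp_agent.ini
[DEFAULT]
enable_metadata_network = True
enable_isolated_metadata = True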

For the compute node (note that I removed MULTI_HOST=True here too):

[[local|localrc]]
HOST_IP=192.168.3.4
FLAT_INTERFACE=eth4
FIXED_RANGE=10.4.128.0/20
FIXED_NETWORK_SIZE=4096
NETWORK_GATEWAY=10.4.128.1
LOGFILE=/opt/stack/logs/stack.sh.log
ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=xyzpdqlazydog
DATABASE_TYPE=mysql
SERVICE_HOST=192.168.3.3
MYSQL_HOST=192.168.3.3
RABBIT_HOST=192.168.3.3
GLANCE_HOSTPORT=192.168.3.3:9292

Q_PLUGIN=ml2

# Timeouts
ACTIVE_TIMEOUT=120
ASSOCIATE_TIMEOUT=60
BOOT_TIMEOUT=120
SERVICE_TIMEOUT=120
OFFLINE=True

ENABLE_TENANT_TUNNELS=True
TENANT_TUNNEL_RANGES=1:1000
ENABLE_TENANT_VLANS=False

Q_ML2_TENANT_NETWORK_TYPE=vxlan

ENABLED_SERVICES=n-cpu,rabbit,neutron,n-api,q-agt

EXTRA_OPTS=(metadata_host=$SERVICE_HOST)

The next (and I think major) change was to create separate tenants and users, and to make sure to authenticate as the specific user for operations performed on that user's behalf (as opposed to what I was doing, which was passing os-username=admin and os-password=admin to the command-line apps as arguments); a concrete boot example is sketched below the security group rules. I also needed to do the following:

nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0 2> /dev/null
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0 2> /dev/null
nova secgroup-add-rule default udp 1 65535 0.0.0.0 ...
(more)
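To illustrate what I mean by authenticating as the specific user, launching a VM as demo2 looks roughly like this (the image and flavor names and the auth URL are from my setup and may well differ on yours):

nova --os-username demo2 --os-password password --os-tenant-name demo2 \
  --os-auth-url http://192.168.3.3:5000/v2.0 \
  boot --image cirros-0.3.1-x86_64-uec --flavor m1.tiny demo2-vm1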
2014-08-28 19:09:48 -0500 received badge  Famous Question (source)
2014-08-11 13:37:01 -0500 received badge  Notable Question (source)
2014-08-03 16:15:22 -0500 received badge  Popular Question (source)
2014-07-25 17:22:50 -0500 edited question devstack, multi-tenant, and metadata service

I have a multi-host, stable/havana era cluster (based on git checkout 1b9859ca78e7ec2b751026dfcb1327fe7076c4f8 of devstack) with one controller and one compute node (the controller also runs compute, so VMs can be scheduled on either the controller node or the compute node).

If I launch VMs using the demo tenant created by devstack, I have no problem getting VMs to reach the metadata service to obtain instance IDs. All is good, and I worked out the kinks of single tenant quite a while ago.

Where I run into problems is multi-tenant. If I try creating VMs using the admin user/tenant, or create a second user/tenant with roles identical to "demo" (I call it demo2), the same thing happens - the metadata service cannot be reached. In the console log of the cirros VM, I see something like this:

checking http://169.254.169.254/2009-04-04/instance-id
failed 1/20: up 9.14. request failed
failed 2/20: up 14.32. request failed

Surely someone has tried out multi-tenancy with devstack. The API descriptions for keystone are clear enough, and I am basically creating the demo2 tenant and user using the same steps as devstack's lib/keystone script. I get no errors from keystone, all the VMs launch happily, and I see no reason why it wouldn't work if I create the demo2 user/tenant exactly the same way demo is created. For that matter, I'm pretty stumped as to why the admin user/tenant also can't reach metadata. The only possible answer is that metadata is somehow bound to the demo tenant, but that would seem silly given that the metadata service surely should be usable by multiple tenants, not just one.
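For what it's worth, the debugging I can do from the controller is along these lines (the UUID below is a placeholder, and I'm not claiming this is the canonical way to test metadata reachability):

# list the neutron namespaces (qrouter-*/qdhcp-*)
ip netns

# from inside the relevant namespace, see whether anything answers on the metadata address
sudo ip netns exec qrouter-<router-uuid> curl -s http://169.254.169.254/2009-04-04/instance-id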

Here is the core of the code I run after running stack.sh (first on the controller node, then on the compute node), to create the user2 tenant and user

...

keystone --os-username admin --os-password password tenant-create --name demo2 --description 'demo2'
demo2=`keystone --os-username admin --os-password password \
      tenant-list | grep " demo2 " | cut -c3-34`
invis=`keystone --os-username admin --os-password password \
      tenant-list | grep "invis" | cut -c3-34`
echo "demo2 tenant is $demo2"
echo "invis tenant is $invis"
keystone --os-username admin --os-password password user-create --name demo2 --pass password --email demo2@example.com
admin_role=`keystone --os-username admin --os-password password \
      role-list | grep " admin " | cut -c3-34`
member_role=`keystone --os-username admin --os-password password \
      role-list | grep " Member " | cut -c3-34`
another_role=`keystone --os-username admin --os-password password \
      role-list | grep " anotherrole " | cut -c3-34`
echo "Member role is $member_role"
echo "Admin role is $admin_role"
echo "Another role is $another_role"
demo2user=`keystone --os-username admin --os-password password \
      user-list | grep " demo2 " | cut -c3-34`
adminuser=`keystone --os-username admin --os-password password \
      user-list | grep " admin " | cut -c3-34`
echo "demo2 user is $demo2user"
keystone --os-username admin --os-password password user-role-add --user $demo2user --role $member_role --tenant $demo2
keystone --os-username admin --os-password password user-role-add --user $adminuser --role $admin_role --tenant $demo2
keystone --os-username admin --os-password password user-role-add --user $demo2user --role $another_role --tenant $demo2
keystone --os-username admin --os-password password user-role-add --user $demo2user --role $member_role --tenant $invis

...
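To double-check the role assignments I list them afterwards, roughly like this (syntax from memory):

keystone --os-username admin --os-password password \
      user-role-list --user $demo2user --tenant $demo2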

The code from devstack that inspired the above is:

 create_keystone_accounts() {

    # admin
    ADMIN_TENANT=$(keystone ...
(more)
2014-07-25 17:20:03 -0500 received badge  Enthusiast
2014-05-23 20:23:01 -0500 received badge  Great Question (source)
2013-11-20 12:07:29 -0500 received badge  Favorite Question (source)
2013-11-20 07:07:14 -0500 received badge  Good Question (source)
2013-11-19 05:46:39 -0500 received badge  Nice Question (source)
2013-11-18 12:08:49 -0500 commented answer ML2 neutron plugin installation and configuration

vxlan works great. One or both of the following was needed to fix it: 1) Make sure OVS 1.10.2 or newer is installed; I fixed this by reimaging with Ubuntu 13.10. 2) I recloned devstack and switched to the stable/havana branch (git checkout stable/havana) before running devstack.
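(A quick way to check the installed version is sudo ovs-vsctl --version, or just look at the ovs_version line at the end of the ovs-vsctl show output.)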

2013-11-18 12:03:09 -0500 commented answer Neutron: VXLAN in Havana

Thanks. In this case, I wasn't using a GUI to install and configure. Rather, I was using devstack, which is a project that provides a set of shell scripts that are designed to install and configure OpenStack nodes. Turns out my bug was a bad OVS install, or an issue fixed by pulling latest havana.

2013-11-17 05:30:14 -0500 received badge  Nice Answer (source)
2013-11-17 00:30:45 -0500 received badge  Famous Question (source)
2013-11-16 05:31:54 -0500 received badge  Teacher (source)
2013-11-15 21:17:27 -0500 answered a question Neutron: VXLAN in Havana

It's working :-) The lowdown is that either the OVS installation on the compute node was messed up somehow, or switching to stable/havana fixed the problem. I re-imaged my blade with Ubuntu 13.10 and re-ran stack.sh using the same localrc I was already using, and now my VMs are able to DHCP and get IP addresses.

2013-11-15 16:02:10 -0500 received badge  Notable Question (source)
2013-11-15 14:24:53 -0500 received badge  Popular Question (source)
2013-11-15 12:57:26 -0500 commented answer Neutron: VXLAN in Havana

Thanks for the response. Updated the original question with the info you requested. Note again the ovs-vsctl show output is similar to what I get for GRE except for the different interface type on the GRE/VXLAN port. I'm going to rerun GRE and look at flows for diffs. What do you expect to see?

2013-11-15 03:41:48 -0500 received badge  Student (source)
2013-11-14 20:58:14 -0500 asked a question Neutron: VXLAN in Havana

Has anyone been able to successfully run either OVS or ML2 with VXLAN networking and can help me understand what pitfalls I might run into in doing so?

I have a stable/havana cluster that works great with GRE running under both OVS and ML2 (I've since switched to ML2 - chasing issues in a deprecated plugin just seems like a fool's endeavour :-)

I'm devstack-based and basically follow the directions on the ML2 wiki.

I see the usual br-tun, br-int, and associated vxlan port on both controller and compute node, e.g.,

sudo ovs-vsctl show
a426636d-4a2d-4a25-bc5b-8945409e1cab
    Bridge br-tun
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-192.168.3.3"
            Interface "vxlan-192.168.3.3"
                type: vxlan
                options: {in_key=flow, local_ip="192.168.3.2", out_key=flow, remote_ip="192.168.3.3"}
        Port br-tun
            Interface br-tun
                type: internal
    Bridge br-int
        Port "qvob86969c3-c8"
            tag: 1
            Interface "qvob86969c3-c8"
        Port "qvo9da1c5b9-44"
            tag: 1
            Interface "qvo9da1c5b9-44"
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
        Port "qvo9c11361a-7f"
            tag: 1
            Interface "qvo9c11361a-7f"
        Port "qvo483db35f-ca"
            tag: 1
            Interface "qvo483db35f-ca"
    ovs_version: "1.10.2"

This mirrors what I would see for GRE (which works), except of course the interface type on the tunnel port would be "gre". As you can see, I'm using OVS 1.10.2, which has VXLAN support (the bigger picture is that I'm on Ubuntu 13.10 saucy, which supplies that version of OVS).

I can fire up VMs, watch VMs on the compute node try to get DHCP via the console log and fail, and also see vxlan-encapsulated packets arriving on the controller from the compute node during the process. After poking around on Google for discussions of VXLAN and openvswitch/ml2/neutron, I thought there might be an MTU issue, but inspecting interfaces (both OVS and physical) I see no indication of packet drops (at least, no reports of packet loss using ifconfig or ovs-dpctl) on either the compute or controller node - and I am guessing that if MTU were an issue, I'd be seeing packet drops (right?).
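If it does turn out to be MTU, the workaround I keep seeing suggested (I have not tried it yet) is to push a smaller MTU to the guests through dnsmasq, roughly:

# /etc/neutron/dnsmasq-neutron.conf (the path is arbitrary)
dhcp-option-force=26,1400

# dhcp_agent.ini
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf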

I'm still getting my feet wet with VXLAN and still learning about Linux networking, but I have had little problem with GRE in folsom, grizzly, and (now) havana for the better part of a year, so I'm thinking I'm just missing something that may be obvious to others regarding VXLAN - clearly just telling ML2 that I want to use VXLAN isn't sufficient.

ovs-vsctl from the controller:

5ee4d85c-f0c9-4ccc-be1a-a4ea685c1c8e
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port "qvoc7ea9e28-d8"
            tag: 1
            Interface "qvoc7ea9e28-d8"
        Port "qvoe6f6e5fa-31"
            tag: 1
            Interface "qvoe6f6e5fa-31"
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qvo24371d66-7a"
            tag: 1
            Interface "qvo24371d66-7a"
        Port "qvoe9f82b47-b3"
            tag: 1
            Interface "qvoe9f82b47-b3"
        Port "tapbbdf3a9d-c9"
            tag: 1
            Interface "tapbbdf3a9d-c9"
                type: internal
    Bridge br-tun
        Port "vxlan-192.168.3.2"
            Interface "vxlan-192.168.3.2"
                type: vxlan
                options: {in_key=flow, local_ip="192.168.3.3", out_key=flow, remote_ip="192.168 ...
(more)
2013-11-14 20:37:20 -0500 answered a question ML2 neutron plugin installation and configuration

With devstack, it was pretty easy (and since it worked, I feel justified in posting the answer for those using devstack). Just added

Q_PLUGIN=ml2

to the localrc, replacing the previous value of Q_PLUGIN (for me it was openvswitch).

GRE worked with only that change. VXLAN is not working for me, after setting:

Q_ML2_TENANT_NETWORK_TYPE=vxlan

I'm still trying to track that down. It could be some issue with fragmentation/MTUs. I see vxlan packets flying around, but my guest VMs can't do DHCP and thus come up with no IP address on the veth.
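For the record, the way I'm watching the vxlan packets fly around is just tcpdump on the physical interface (eth4 in my case; I believe OVS defaults to UDP port 4789 for VXLAN):

sudo tcpdump -ni eth4 udp port 4789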

More info on this page: https://wiki.openstack.org/wiki/Neutron/ML2