2019-09-16 21:44:04 -0500 | received badge | ● Famous Question
2019-09-16 21:44:04 -0500 | received badge | ● Notable Question
2019-05-11 21:10:02 -0500 | received badge | ● Popular Question
2019-04-05 16:00:22 -0500 | received badge | ● Editor
2019-04-05 14:48:34 -0500 | asked a question | DVR and assignment of FIP gateway IP address Hi, I have a routing requirement that each compute node connected to the fabric have a FIP gateway whose IP address is allocated from a distinct subnet. For example, the FIP gateway address is assigned from 11.1.0.0/26 for compute node 1, from 11.2.0.0/26 for compute node 2, and so on. I'm aware of the following: Which means that for me, it might be something like this that gets me the subnets: But then the question is, if the above is legitimate, how do I make the association (compute node 1 allocates its FIP gateway address from demo-floating-ip-agent-gateway-subnet-1, compute node 2 from demo-floating-ip-agent-gateway-subnet-2, etc.)? Alternatively, does anyone have experience using DVR in CLOS-based networks who can provide some insight into configurations that support DVR? Or is DVR not a good solution here, and old-school vRouting or something else is needed to support FIPs?
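[Editor's sketch, not part of the original post] For concreteness, per-node FIP agent gateway subnets of the kind described above might be created along these lines with the openstack CLI; the external network name "public" is an assumption, and the subnet names/ranges are borrowed from the question:

    # Hypothetical: one small subnet per compute node on the external network,
    # tagged for floating-IP agent gateway allocation (names/ranges illustrative).
    openstack subnet create --network public --subnet-range 11.1.0.0/26 \
        --service-type network:floatingip_agent_gateway \
        demo-floating-ip-agent-gateway-subnet-1
    openstack subnet create --network public --subnet-range 11.2.0.0/26 \
        --service-type network:floatingip_agent_gateway \
        demo-floating-ip-agent-gateway-subnet-2

This only creates the subnets; how Neutron chooses which of them to use for a given compute node's FIP gateway port is exactly the open question in the post.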
2016-11-29 11:47:51 -0500 | received badge | ● Good Answer
2015-08-13 08:49:44 -0500 | received badge | ● Enlightened
2015-08-13 08:49:44 -0500 | received badge | ● Expert
2015-03-28 03:09:53 -0500 | received badge | ● Perfect Answer
2014-12-03 17:00:27 -0500 | received badge | ● Good Answer
2014-10-21 07:31:37 -0500 | received badge | ● Nice Answer
2014-10-20 15:23:41 -0500 | answered a question | What's the difference between flat, gre and vlan Neutron network types? Some additional comments to add to what larsks answered - In a flat network, everyone shares the same network segment. For example, say 2 tenants are sharing the cluster, and this segment is 10.4.128.0/20 - VM1 from tenant 1 might get assigned 10.4.128.3, VM1 from tenant 2 might get 10.4.128.4, and so on. This means that tenant 1 can see the traffic from tenant 2. Not a good thing in most cases. In a VLAN network, tenants are separated because each is assigned a VLAN. With the OpenVSwitch plugin (or ML2 with the OVS driver), OVS allocates an internal VLAN in the virtual switches for each tenant. If you mix in a hardware plugin like the Cisco Nexus plugin, it will be asked to allocate VLANs as well. These VLANs provide separation among the tenants (as VLANs are designed to do). It also means that tenants can specify the same subnet and overlap in that subnet range - VM1 from tenant 1 can get assigned IP 10.4.128.3 and VM1 from tenant 2 can also get 10.4.128.3, without conflict. This makes life easier for administrators, because they don't have to worry about tenants that want the same subnet and address allocations; the VLANs keep them separate. GRE segmentation (and VXLAN) also provides separation among tenants, and also allows overlapping subnets and IP ranges. It does this by encapsulating tenant traffic in tunnels. Say your tenant has VMs running on compute nodes A, B, and C. Neutron (along with OVS) will build a fully connected mesh of tunnels between all of these machines, and create a tunnel bridge on each node that is used to direct traffic from VMs into and out of these tunnels. If a VM on machine A wants to send packets to a VM on machine B, machine A will encapsulate the IP packets coming out of the VM using a segmentation ID that is generated for the tenant by OpenStack, and the receiving machine (B) will decapsulate the packets and route them to the destination VM using the addressing information in the ethernet frame. GRE and VXLAN scale better than VLAN, and while VLAN-based networking probably has its applications (you might be integrating with an infrastructure that is VLAN-based to begin with), I have found GRE/VXLAN-based OVS setups easier to deploy and debug than VLAN-based setups (one reason is that you can use a dumb switch to connect all the physical hosts), so my feeling is you want to start there if you have a deployment scenario that involves multiple tenants and you want to allow for overlapping network segments and IP address ranges across tenants.
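[Editor's sketch, not part of the original answer] As an illustration of the three segmentation types discussed above, provider networks of each type can be created roughly like this with the current openstack CLI; the network names, "physnet1", and the segment IDs are assumptions:

    # Flat: everyone shares one untagged segment on the physical network.
    openstack network create --provider-network-type flat \
        --provider-physical-network physnet1 flat-net
    # VLAN: tenants separated by an 802.1Q tag (VLAN 101 illustrative).
    openstack network create --provider-network-type vlan \
        --provider-physical-network physnet1 --provider-segment 101 vlan-net
    # VXLAN: tenant traffic encapsulated in tunnels keyed by a segmentation ID (VNI 2001 illustrative).
    openstack network create --provider-network-type vxlan \
        --provider-segment 2001 vxlan-net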
2014-09-08 16:48:07 -0500 | commented on an answer | devstack, multi-tenant, and metadata service I should add that the error message related to the inability to reach 169.254.169.254 was "no route". The route table in one of the VMs now shows this problem to have been fixed: ci-info: | 2 | 169.254.169.254 | 10.0.0.4 | 255.255.255.255 | eth0 | U |
2014-09-08 16:40:19 -0500 | answered a question | devstack, multi-tenant, and metadata service Ok, well, I seem to have an answer after a lot of brute-force experimentation. The way I solved this was by adjusting the localrc files on both the controller and compute node, and changing how I create tenants/users and launch VMs. The key was to stop running all command line apps authenticated as "admin", and instead run some of them authenticated as the user (and associated tenant) appropriate for the command being run - for example, launching VMs. Here is my new controller local.conf; the key change was to disable the q-l3 service and remove MULTI_HOST=True (I was really confused by that - it doesn't have anything to do with running on more than one cluster node, but with where services are available - I want all services to run only on the controller). [[local|localrc]] disable_service n-net enable_service q-svc enable_service q-agt enable_service q-dhcp enable_service q-meta enable_service neutron disable_service q-l3 HOST_IP=192.168.3.3 FLAT_INTERFACE=eth4 FIXED_RANGE=10.4.128.0/24 NETWORK_GATEWAY=10.4.128.1 #FLOATING_RANGE=192.168.3.128/25 LOGFILE=/opt/stack/logs/stack.sh.log ADMIN_PASSWORD=password MYSQL_PASSWORD=password RABBIT_PASSWORD=password SERVICE_PASSWORD=password SERVICE_TOKEN=xyzpdqlazydog EXTRA_OPTS=(metadata_host=$HOST_IP) Q_ML2_TENANT_NETWORK_TYPE=vxlan SCHEDULER=nova.scheduler.filter_scheduler.FilterScheduler IMAGE_URLS="http://cloud-images.ubuntu.com/quantal/current/quantal-server-cloudimg-amd64-disk1.img" Q_PLUGIN=ml2 ACTIVE_TIMEOUT=120 ASSOCIATE_TIMEOUT=60 BOOT_TIMEOUT=120 SERVICE_TIMEOUT=120 OFFLINE=True ENABLE_TENANT_TUNNELS=True TENANT_TUNNEL_RANGES=1:1000 ENABLE_TENANT_VLANS=False Q_DHCP_EXTRA_DEFAULT_OPTS=(enable_metadata_network=True enable_isolated_metadata=True) Note that I incorporated rosella-o's suggestion by passing args via Q_DHCP_EXTRA_DEFAULT_OPTS - note the inclusion, in addition to his suggestion, of enable_isolated_metadata, because the comments in the .conf file said I needed that enabled as well. I'm going to +1 his suggestion since (as best I can tell) it is a contributing factor, but I really had other dragons to slay before I could see things work. For the compute node (note that MULTI_HOST=True is removed here too): [[local|localrc]] HOST_IP=192.168.3.4 FLAT_INTERFACE=eth4 FIXED_RANGE=10.4.128.0/20 FIXED_NETWORK_SIZE=4096 NETWORK_GATEWAY=10.4.128.1 LOGFILE=/opt/stack/logs/stack.sh.log ADMIN_PASSWORD=password MYSQL_PASSWORD=password RABBIT_PASSWORD=password SERVICE_PASSWORD=password SERVICE_TOKEN=xyzpdqlazydog DATABASE_TYPE=mysql SERVICE_HOST=192.168.3.3 MYSQL_HOST=192.168.3.3 RABBIT_HOST=192.168.3.3 GLANCE_HOSTPORT=192.168.3.3:9292 Q_PLUGIN=ml2 # Timeouts ACTIVE_TIMEOUT=120 ASSOCIATE_TIMEOUT=60 BOOT_TIMEOUT=120 SERVICE_TIMEOUT=120 OFFLINE=True ENABLE_TENANT_TUNNELS=True TENANT_TUNNEL_RANGES=1:1000 ENABLE_TENANT_VLANS=False Q_ML2_TENANT_NETWORK_TYPE=vxlan ENABLED_SERVICES=n-cpu,rabbit,neutron,n-api,q-agt EXTRA_OPTS=(metadata_host=$SERVICE_HOST) The next (I think major) change was to create separate tenants and users, and to make sure to authenticate as the specific user for operations performed on behalf of that user (as opposed to what I was doing, which was passing os-username=admin and os-password=admin to the command line apps as arguments). I also needed to do the following: nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0 2> /dev/null nova secgroup-add-rule default tcp 22 22 0.0.0.0/0 2> /dev/null nova secgroup-add-rule default udp 1 65535 0.0.0.0 ...(more)
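[Editor's sketch, not part of the original answer] The "authenticate as the tenant's own user" point above boils down to something like the following; the user/tenant name, password, image, and flavor are illustrative assumptions, and the keystone endpoint reuses the controller IP from the localrc above:

    # Run CLI commands as the demo2 user/tenant rather than as admin.
    export OS_AUTH_URL=http://192.168.3.3:5000/v2.0   # keystone on the controller
    export OS_TENANT_NAME=demo2                        # illustrative tenant
    export OS_USERNAME=demo2                           # illustrative user
    export OS_PASSWORD=password
    # Boot a VM under that tenant; image and flavor names are assumptions.
    nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec demo2-vm1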
2014-08-28 19:09:48 -0500 | received badge | ● Famous Question
2014-08-11 13:37:01 -0500 | received badge | ● Notable Question
2014-08-03 16:15:22 -0500 | received badge | ● Popular Question
2014-07-25 17:22:50 -0500 | edited a question | devstack, multi-tenant, and metadata service I have a multi-host, stable/havana era cluster (based on git checkout 1b9859ca78e7ec2b751026dfcb1327fe7076c4f8 of devstack): one controller and one compute node (the controller also has compute running on it, so VMs can be scheduled either on the controller node or the compute node). If I launch VMs using the demo tenant created by devstack, I have no problem getting VMs to reach the metadata service to obtain instance IDs. All is good, and I worked out the kinks of single tenant quite a while ago. Where I run into problems is multi-tenant. If I try creating VMs using the admin user/tenant, or try creating a second user/tenant with roles identical to "demo" (I call it demo2), same thing - the metadata service cannot be reached. In the console log of the cirros VM, I see something like this: Surely someone has tried out multi-tenancy with devstack. The API descriptions for keystone are clear enough, and I am basically creating the demo2 tenant and user using the same steps as devstack's lib/keystone script. I get no errors from keystone, all the VMs launch happily, and I see no reason, if I create the demo2 user/tenant exactly the same way as demo is created, why it wouldn't work. For that matter, I'm pretty stumped why the admin user/tenant also can't reach metadata. The only possible answer is that metadata is somehow bound to the demo tenant in some way, but that would seem silly given that the metadata service surely should be usable by multiple tenants, not just one. Here is the core of the code I run after running stack.sh (first on the controller node, then on the compute node) to create the demo2 tenant and user. The code from devstack that inspired the above is: (more)
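[Editor's sketch, not part of the original question; the actual snippet is elided above] The tenant/user creation described (mirroring devstack's lib/keystone flow) looks roughly like this with the havana-era keystone CLI; the password and the Member role name are assumptions:

    # Create the demo2 tenant and user, then grant the Member role.
    keystone tenant-create --name=demo2 --description="second demo tenant"
    keystone user-create --name=demo2 --pass=secret
    keystone user-role-add --user=demo2 --tenant=demo2 --role=Member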
2014-07-25 17:20:03 -0500 | received badge | ● Fan
2014-05-23 20:23:01 -0500 | received badge | ● Great Question
2013-11-20 12:07:29 -0500 | received badge | ● Favorite Question
2013-11-20 07:07:14 -0500 | received badge | ● Good Question
2013-11-19 05:46:39 -0500 | received badge | ● Nice Question
2013-11-18 12:08:49 -0500 | commented on an answer | ML2 neutron plugin installation and configuration vxlan works great. One or both of the following was needed to fix it: 1) Make sure OVS 1.10.2 or newer is installed. I fixed this by reimaging with Ubuntu 13.10. 2) I recloned devstack and switched to the stable/havana branch (git checkout stable/havana) before running devstack.
2013-11-18 12:03:09 -0500 | commented on an answer | Neutron: VXLAN in Havana Thanks. In this case, I wasn't using a GUI to install and configure. Rather, I was using devstack, which is a project that provides a set of shell scripts designed to install and configure OpenStack nodes. It turns out my bug was a bad OVS install, or an issue fixed by pulling the latest havana.
2013-11-17 05:30:14 -0500 | received badge | ● Nice Answer
2013-11-17 00:30:45 -0500 | received badge | ● Famous Question
2013-11-16 05:31:54 -0500 | received badge | ● Teacher
2013-11-15 21:17:27 -0500 | answered a question | Neutron: VXLAN in Havana It's working :-) The lowdown is that either the OVS installation on the compute node was messed up somehow, or switching to stable/havana fixed the problem. I re-imaged my blade with Ubuntu 13.10, re-ran stack.sh using the same localrc I was already using, and now my VMs are able to DHCP and get IP addresses.
2013-11-15 16:02:10 -0500 | received badge | ● Notable Question
2013-11-15 14:24:53 -0500 | received badge | ● Popular Question
2013-11-15 12:57:26 -0500 | commented on an answer | Neutron: VXLAN in Havana Thanks for the response. I updated the original question with the info you requested. Note again that the ovs-vsctl show output is similar to what I get for GRE, except for the different interface type on the GRE/VXLAN port. I'm going to rerun GRE and look at the flows for diffs. What do you expect to see?
2013-11-15 03:41:48 -0500 | received badge | ● Student
2013-11-14 20:58:14 -0500 | asked a question | Neutron: VXLAN in Havana Has anyone been able to successfully run either OVS or ML2 with VXLAN networking, and can you help me understand what pitfalls I might run into in doing so? I have a stable/havana cluster that works great with GRE running under both OVS and ML2 (I've since switched to ML2; chasing issues in a deprecated plugin just seems like a fool's endeavour :-). I'm devstack based and basically follow the directions on the ML2 wiki. I see the usual br-tun, br-int, and associated vxlan port on both the controller and compute node, e.g.: This mirrors what I would see for GRE (which works), except of course the interface type on the tunnel port would be "gre". As you can see, I'm using OVS 1.10.2, which has VXLAN support (the bigger picture is that I am based on Ubuntu 13.10 saucy, which supplies that version of OVS). I can fire up VMs and see, via the console log, VMs on the compute node try to get DHCP and fail, and I also see vxlan-encapsulated packets arriving on the controller from the compute node during the process. I thought maybe there might be an MTU issue after poking around on google for discussions of VXLAN and openvswitch/ml2/neutron, but inspecting interfaces (both OVS and physical) I see no indication of packet drops (at least, no reports of packet loss using ifconfig or ovs-dpctl) on either the compute or controller node (and I am guessing that if MTU were an issue, I'd be seeing packet drops, right?). I'm still getting my feet wet with VXLAN and I'm also still learning about Linux networking, but I have for the better part of the year had little problem with GRE in folsom, grizzly, and (now) havana, so I'm thinking I'm just missing something that may be obvious to others regarding VXLAN - clearly just telling ML2 that I want to use VXLAN isn't sufficient. ovs-vsctl from the controller: (more)
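[Editor's sketch, not part of the original question] A common era-appropriate mitigation for the MTU concern speculated about above is to have the DHCP agent advertise a smaller MTU to instances so VXLAN encapsulation overhead fits in a 1500-byte underlay; the file paths, the 1450 value, and the service name are assumptions to adjust for your install:

    # In /etc/neutron/dhcp_agent.ini, under [DEFAULT], point dnsmasq at an extra
    # config file (sketch; not devstack-specific):
    #   dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf
    # Then have dnsmasq push a 1450-byte MTU (DHCP option 26) to instances.
    echo "dhcp-option-force=26,1450" | sudo tee /etc/neutron/dnsmasq-neutron.conf
    sudo service neutron-dhcp-agent restart   # or restart the q-dhcp screen session under devstack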
2013-11-14 20:37:20 -0500 | answered a question | ML2 neutron plugin installation and configuration With devstack, it was pretty easy (and since it worked, I feel justified in posting the answer for those using devstack). Just add Q_PLUGIN=ml2 to the localrc, replacing the previous value of Q_PLUGIN (for me it was openvswitch). GRE worked with only that change. VXLAN is not working for me after setting Q_ML2_TENANT_NETWORK_TYPE=vxlan; I'm still trying to track that down. It could be some issue with fragmentation/MTUs. I see vxlan packets flying around, but my guest VMs can't do DHCP and thus come up with no IP address on the veth. More info on this page: https://wiki.openstack.org/wiki/Neutron/ML2
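[Editor's sketch] Pulled together, the devstack settings mentioned in this answer (and in the localrc files quoted earlier in this log) amount to a few lines; the tunnel range is illustrative:

    # localrc fragment for ML2 with tunnel-backed tenant networks
    Q_PLUGIN=ml2                        # use the ML2 plugin instead of openvswitch
    Q_ML2_TENANT_NETWORK_TYPE=vxlan     # gre also works, per the answer above
    ENABLE_TENANT_TUNNELS=True
    TENANT_TUNNEL_RANGES=1:1000         # illustrative tunnel-id/VNI range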