
vthapar's profile - activity

2016-04-05 02:35:49 -0500 received badge  Nice Answer (source)
2015-07-15 21:59:48 -0500 received badge  Famous Question (source)
2015-07-15 21:59:48 -0500 received badge  Notable Question (source)
2015-02-09 04:11:16 -0500 commented answer What is expected behavior on setting admin_state_up=false for neutron ports?

Thank you for the follow-up. There is no agent on the compute node when ODL is used, so that could be why it never goes 'down'. Will raise a bug with ODL.

2015-02-09 01:36:33 -0500 received badge  Popular Question (source)
2015-02-02 23:05:15 -0500 commented answer What is expected behavior on setting admin_state_up=false for neutron ports?

Thanks for the reply. Any idea which part of the code sets the physical status to down in an OVS Neutron Agent setup? Would it be nova-compute, the OVS agent, or something else? This would help figure out whether the bug is in the Neutron or ODL code.

2015-02-02 08:01:52 -0500 asked a question What is expected behavior on setting admin_state_up=false for neutron ports?

I have an ODL [compiled off master] + OpenStack [stable/juno devstack] setup. I am able to bring up VMs. When I set the admin_state of a neutron port in use by a VM to false, the VM can still ping, and OVS shows both the link state and admin state of the OF port as up.

Is this the expected behavior, or a bug in the ODL implementation?
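
For reference, this is roughly how I toggle and check the state (the port ID is a placeholder):

neutron port-update <PORT_ID> --admin-state-up False
ovs-vsctl --columns=name,admin_state,link_state list Interface

Even after the update, both admin_state and link_state stay 'up' on the OF port.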

2014-12-03 23:53:57 -0500 commented answer Can we create only 4094 networks using OVS?

One thing to remember is what you mean by "Network Node" - the node where the L3 agent [and DHCP too, I believe] is running. Look up the L3 agent rescheduler for information on how to run multiple agents on multiple nodes. However, I'd suggest looking into DVR as a much better alternative if you're on Juno.

2014-12-01 03:49:54 -0500 received badge  Commentator
2014-12-01 03:49:54 -0500 commented answer Can we create only 4094 networks using OVS?

I believe the original question is not how a VNI or GRE ID greater than 4094 is supported [explained by the local scope], but how one node can support more than 4094 networks when each network maps to a VLAN ID. Local VLANs mean a VNI of 10001 can map to VID 1 on one node and VID 101 on another node.

2014-12-01 01:56:28 -0500 answered a question Can we create only 4094 networks using OVS?
  1. Using VxLAN/GRE you can have more than 4094 networks, but you are still limited to 4094 per node. This is because the VNI or GRE tunnel ID is internally converted to a VLAN ID with node-local scope.
  2. This limit means you cannot have more than 4094 unique networks on one compute node.
  3. It also means you cannot have more than 4094 unique networks on a single Network Node [I believe this is what you call the Neutron Node in your question].
  4. But you CAN have multiple Compute and Network nodes.

Also, I am assuming that by Neutron Node you mean the node running the L3 agent. Let me know if I am wrong about that. Hope this answers your question.
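
A quick way to see the node-local VLAN mapping on a compute node, assuming an OVS agent setup:

ovs-vsctl --columns=name,tag list Port

The tag column is the locally scoped VLAN ID assigned to each port; the same network will generally show a different tag on another node.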

2014-11-16 23:26:35 -0500 commented question Improving neutron openvswitch performance

I've seen significant improvement in performance by increasing the MTU on VMs, so you could try that. Refer to "Forcing MTU..." for how to set the MTU on VMs using dnsmasq.
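
A minimal sketch of the dnsmasq approach, assuming an MTU of 1400 (adjust for your tunnel overhead). In dhcp_agent.ini:

[DEFAULT]
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf

And in /etc/neutron/dnsmasq-neutron.conf:

dhcp-option-force=26,1400

Restart the DHCP agent, then renew the lease on the VM; DHCP option 26 sets the interface MTU.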

2014-11-07 00:00:29 -0500 answered a question neutron agent-list not showing on controller node

Check the logs for each of the agents on the network node. One common mistake I've seen is starting a separate rabbitmq on the network node and pointing the agents to that. So check neutron.conf on the Network Node and make sure it is pointing to your controller node [assuming that is where your rabbitmq is running].
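
For a Juno-era setup the relevant neutron.conf setting looks something like this (the host name is a placeholder):

[DEFAULT]
rabbit_host = controller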

2014-11-06 04:13:33 -0500 received badge  Enthusiast
2014-11-04 00:39:07 -0500 commented answer Openstack juno how to configure dvr in network-node and compute-node

Add to ml2_conf.ini as follows

[agent]
l2_population = True
tunnel_types = vxlan
enable_distributed_routing = True

Whether it goes in ml2_conf.ini or ovs_neutron_plugin.ini depends on which plugin you've configured as core_plugin in neutron.conf.

2014-11-04 00:30:52 -0500 commented answer How to load entry point for new ML2 driver plugin ?

How did you get the neutron code to write the plugin? If you cloned it off the repo, then it should be present in your root neutron directory.

https://github.com/openstack/neutron/...

2014-10-22 07:26:01 -0500 received badge  Teacher (source)
2014-10-22 00:42:31 -0500 commented answer In ml2_conf.ini file, what does it mean when the attribute “tenant_network_types” is set to multiple values?

Yes, it seems confusing at first but this is the way I look at it:

tenant_network_types specifies what sorts of provider networks are available for tenants to use. The provider options specify which of the available types to actually use for a particular tenant network.
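
As a sketch, the ml2_conf.ini side might look like this (values illustrative):

[ml2]
tenant_network_types = vlan,gre

Tenant networks then come out of these types, while an admin can still pass --provider options to net-create to pick one explicitly.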

2014-10-21 10:05:21 -0500 commented question How do I extend a subnet allocation pool in neutron

I believe this will work only with nova-network, not neutron, right?

2014-10-21 10:02:16 -0500 commented answer How do I extend a subnet allocation pool in neutron

Which versions of OpenStack and the neutron client are you using? I tried with neutron client version 2.3.10 and it worked for me.

2014-10-21 09:46:43 -0500 answered a question How do I extend a subnet allocation pool in neutron

You can use neutron subnet-update.

Let's say you want to add 10.10.12.200 - 10.10.12.210 and 10.10.12.220 - 10.10.12.230 as two non-contiguous ranges to the existing one. For your subnet this is how you'd do it:

neutron subnet-update ext-subnet --allocation-pools start=10.10.12.200,end=10.10.12.210 --allocation-pools start=10.10.12.220,end=10.10.12.230

Note that this will only work with Juno or later, as updating allocation_pools was fixed in Juno. Also, --allocation-pools replaces the whole list, so include your existing range in the same command if you want to keep it.

2014-10-21 09:15:52 -0500 answered a question neutron-openvswitch port - how to disable source IP address checking?

I am not sure if you can disable it without disabling security groups/port security altogether, but allowed-address-pairs should solve your problem.

From the manual:

The allowed address pair extension extends the port attribute to enable you to specify arbitrary mac_address/ip_address(cidr) pairs that are allowed to pass through a port regardless of the subnet associated with the network.

Refer to this link for details on how to configure allowed-address-pairs:

http://docs.openstack.org/admin-guide-cloud/content/section_allowed_address_pairs_workflow.html
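
For example, to allow an extra CIDR through a port (port ID and CIDR are placeholders):

neutron port-update <PORT_ID> --allowed-address-pairs type=dict list=true ip_address=10.20.30.0/24

Traffic sourced from addresses in that range will then pass the port without turning off port security entirely.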

2014-10-21 09:05:30 -0500 answered a question In ml2_conf.ini file, what does it mean when the attribute “tenant_network_types” is set to multiple values?

Yes, it is possible. You use the --provider options for neutron net-create. Let us say you want VM1 of tenant1 to use VLAN 100 and VM1 of tenant2 to use GRE tunnel ID 1000:

neutron net-create vlan-net --provider:network_type vlan --provider:segmentation_id 100
neutron net-create gre-net --provider:network_type gre --provider:segmentation_id 1000

Note that I skipped other fields in net-create, like tenant-id, and for the VLAN network you will also need --provider:physical_network, but you should already know how to use those.

2014-10-21 00:16:13 -0500 received badge  Supporter (source)
2014-10-21 00:14:43 -0500 answered a question Is it possible to move a port from one bridge to another?

There is no 'ovs-vsctl move' command. You'll have to do a delete followed by an add, i.e. 'ovs-vsctl del-port from_bridge port' followed by 'ovs-vsctl add-port to_bridge port'.
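
Since ovs-vsctl can chain commands with '--', the delete and add can also go in one invocation, which runs as a single database transaction (bridge and port names are placeholders):

ovs-vsctl del-port br-old tap0 -- add-port br-new tap0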

2014-10-20 14:10:06 -0500 answered a question How to load entry point for new ML2 driver plugin ?

Are you writing a new plugin, an ML2 mechanism driver, or a type driver? For each of these you need to add an entry in neutron/setup.cfg under [entry_points], in the relevant section (see the sketch below):

  1. neutron.core_plugins for a new core/monolithic plugin.
  2. neutron.service_plugins for service plugins.
  3. neutron.ml2.type_drivers for ML2 type drivers.
  4. neutron.ml2.mechanism_drivers for ML2 mechanism drivers.
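
A minimal sketch of a mechanism driver entry (the driver name and module path are hypothetical):

[entry_points]
neutron.ml2.mechanism_drivers =
    mydriver = neutron.plugins.ml2.drivers.mech_mydriver:MyMechanismDriver

With that, ml2_conf.ini can reference the driver by its short name, e.g. mechanism_drivers = mydriver.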

Once your changes to setup.cfg are committed, they'll be available in the default neutron distribution.

Happy coding.

2014-10-20 14:08:03 -0500 commented answer what is meant by monolithic plugin

What it means is that you can use multiple mech drivers at the same time. The way the ML2 code works, it calls every configured mechanism driver for every method. So, let's say you have openvswitch and linuxbridge configured as mechanism drivers; both will get called for every port-create.

2014-10-20 06:29:47 -0500 commented answer what is meant by monolithic plugin

Yes, you can have multiple mechanism drivers. OpenVSwitch and LinuxBridge are commonly used together, but they don't work the way you seem to suggest [some ports created by OVS, some by LB].

2014-10-20 06:29:46 -0500 commented question not create a listening tcp socket on the port 9696, not connection to neutron

rabbitmq/qpid is not running, assuming you're running a single-node setup. If it's multinode, check where you're running rabbitmq/qpid and make sure rabbit_host is properly configured in neutron.conf.

2014-10-20 02:00:08 -0500 answered a question neutron net-create :unable to establish connection to http://controller:9696/v2.0/networks.json

Are you using a proxy? If you are, you need to add the Neutron server IP to the no_proxy list in your environment variables. I frequently get this error when I forget to add it on a new deployment.
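
For example (the host name is a placeholder for your Neutron server):

export no_proxy=$no_proxy,controller

Put it in your shell profile or openrc so it survives new sessions.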

2014-10-20 02:00:03 -0500 answered a question No br-int for compute node

What versions of OpenStack and ODL are you running? Have you enabled odl-compute in your compute node's local.conf? In Hydrogen, the Neutron plugin in ODL adds and configures the bridges. If you're using Hydrogen, look at the ODL logs.

2014-10-20 02:00:03 -0500 answered a question Openstack juno how to configure dvr in network-node and compute-node

Just this option alone is not enough. Refer to this link for detailed information:

wiki.openstack.org/wiki/Neutron/DVR/HowTo

You need to configure l3_agent.ini and ml2_conf.ini too; a sketch of the key settings follows.
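
As a sketch of the key Juno DVR settings (the wiki above is authoritative; values may differ per distro). In neutron.conf on the controller:

[DEFAULT]
router_distributed = True

In l3_agent.ini, set agent_mode = dvr_snat on the network node and agent_mode = dvr on compute nodes. Combine these with the [agent] options from my comment above (l2_population, enable_distributed_routing) on every node running the OVS agent.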