
Assaf Muller's profile - activity

2016-01-12 02:01:01 -0500 commented question Not able to ssh into RDO Kilo instance

It looks like the instance itself did get an IP, so this boils the problem down to an issue either between you and the router, or between the router and the instance. There are some tips in a talk I gave a while ago: http://assafmuller.com/2015/08/31/neu.... I hope this helps, keep us updated :)

2016-01-12 02:01:01 -0500 commented question Freeing IP from FLAT network setup

When you delete a VM, it will free up its IP.

For example:

  • Create VM 1 (Gets IP 1)
  • Create VM 2 (Gets IP 2)
  • Delete VM 1 (Reclaims IP 1)
  • Create VM 3 (Gets IP 3; IP 1 is still available)
  • Create VMs 4 to 10, they get IPs 4 to 10
  • Create VM 11, it will get IP 1
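
A minimal sketch of the same sequence with the nova CLI (the image, flavor, and NET_ID values are hypothetical placeholders):

nova boot --image cirros --flavor m1.tiny --nic net-id=NET_ID vm1   # gets IP 1
nova boot --image cirros --flavor m1.tiny --nic net-id=NET_ID vm2   # gets IP 2
nova delete vm1                                                     # IP 1 returns to the pool
nova boot --image cirros --flavor m1.tiny --nic net-id=NET_ID vm3   # gets IP 3; IP 1 stays available
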
2016-01-12 02:01:00 -0500 commented question Routing between two tenants

What about security groups? When you create network A, B, or C, you also put all of its ports in a security group for that network. This means that A, B, and C won't be able to communicate with each other. Then, in each of the security groups for A, B, and C, allow incoming traffic from the security group for network 'INFRA', and vice versa.
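
A minimal sketch with the neutron CLI (the group names and IDs are hypothetical):

neutron security-group-create sg-net-a
neutron security-group-rule-create --direction ingress --remote-group-id SG_INFRA_ID sg-net-a
neutron security-group-rule-create --direction ingress --remote-group-id SG_NET_A_ID sg-infra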

2016-01-11 20:55:42 -0500 received badge  Commentator
2015-12-02 16:32:50 -0500 received badge  Nice Answer (source)
2014-11-09 06:14:27 -0500 commented question How to enable access between tenants?

Why different tenants? Are these different server groups forming the same app? If so, they should be in the same tenant but in different networks, with security groups defining the allowed traffic between the networks. Different tenants would communicate via the external network.

2014-08-06 02:29:43 -0500 received badge  Nice Answer (source)
2014-06-24 12:56:01 -0500 commented question duplicated IPs in dnsmasq host file

Are you using Icehouse? If so, did you disable Neutron's notifications to Nova?

Other than that, generally speaking, these sorts of things happen when Neutron hits 100% CPU usage. You can consider setting api_workers and rpc_workers in neutron.conf, which should solve that issue.
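
For example, in neutron.conf (the worker counts are illustrative; tune them to your CPU core count):

[DEFAULT]
api_workers = 4
rpc_workers = 4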

2014-06-24 12:53:40 -0500 answered a question Multiple metadata and DHCP agents

Yes, it should be possible.

As for special configuration, you'll need to set enable_isolated_metadata = True in the DHCP agent's ini file.

You can also consider playing around with 'dhcp_agents_per_network' in neutron.conf. This will cause the Neutron controller to schedule each network on N DHCP agents, instead of just 1 (the default). Unless you change that value, all DHCP requests for a given network will be answered by a single node. You can set dhcp_agents_per_network to the number of compute nodes, meaning that each node will answer its own DHCP requests.
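
A sketch of the relevant settings (the dhcp_agents_per_network value is illustrative):

# dhcp_agent.ini
[DEFAULT]
enable_isolated_metadata = True

# neutron.conf on the controller
[DEFAULT]
dhcp_agents_per_network = 3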

2014-05-04 02:39:57 -0500 answered a question devstack: how to bridge VMs directly to external network using quantum

Here are the instructions: http://trickycloud.wordpress.com/2013...

The gist is that you'll configure all compute nodes with a new bridge, connected to the physical NIC of the pre-existing network you'd like to use. You'll then configure the OVS agent(s) to know about this new bridge. Then you'll create a new provider network, mapped to the physical network you'd like your VMs to connect to. Connect your VMs to this new Neutron provider network, and you're done.
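
A rough sketch of those steps, assuming OVS agents and hypothetical bridge/NIC/network names:

# On each compute node: create the bridge and attach the physical NIC
sudo ovs-vsctl add-br br-eth1
sudo ovs-vsctl add-port br-eth1 eth1

# In the OVS agent configuration (plugin.ini), map a physical network name to the bridge:
# bridge_mappings = physnet1:br-eth1

# Create the provider network mapped to that physical network:
neutron net-create ext-net --provider:network_type=flat --provider:physical_network=physnet1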

2014-05-04 02:28:09 -0500 commented question Devstack vm on compute node does not get dhcp

I'd take a look at the OVS agent log on the compute node. I'd guess there are errors there.

2014-05-04 02:24:50 -0500 commented question getting an agent error when creating an instance

It seems like the instance build failed, then a Nova notification was sent to clean up the instance and that failed as well. More importantly though - I think it's a safe bet that there's an earlier error in the log that explains why the instance build failed.

2014-05-01 02:58:59 -0500 commented question Instance not taking IP form The DHCP and Gateway router external port down?

Could you paste the output of: ovs-vsctl show

I'm interested in the VLAN tags on the tap devices on the network node and compute nodes. A VLAN tag of 4095 signals an error, and I'd expect errors in the OVS agent log as well.
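
A quick way to spot ports stuck in the error VLAN (a sketch; run on the node in question):

sudo ovs-vsctl show | grep -B 3 'tag: 4095'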

2014-05-01 02:56:46 -0500 commented question packstack installation ends up with neutron agent intermittently on and off.

Can you paste the neutron.conf from all nodes? I'm specifically looking for agent_down_time on the controller node and report_interval on the compute nodes. Just a hunch, but you might want to increase agent_down_time and/or decrease report_interval.
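
For example (the values are illustrative; as a rule of thumb, agent_down_time should be at least twice report_interval):

# neutron.conf on the controller
[DEFAULT]
agent_down_time = 75

# neutron.conf on the compute nodes
[AGENT]
report_interval = 30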

2014-04-29 02:22:24 -0500 received badge  Nice Answer (source)
2014-04-25 12:31:39 -0500 received badge  Nice Answer (source)
2014-04-25 08:34:25 -0500 commented question How to setup neutron with provider network?

The first two warnings (about there being no DHCP agent for this network when it was created) are fine. They only mean that the network wasn't scheduled to a DHCP agent yet; it's scheduled when the first port is created on that network.

As for the port binding failure - that's the ticket. There must be something wrong in the configuration files, as the VM's port isn't being bound successfully to the L2 agent. The VLAN tag of 4095 on the 'qvo' device (which is eventually connected to the VM's tap device) is an error-state VLAN.

You can take a look at the OVS agent log to hopefully find the root cause of the issue. My bet is on a configuration issue with the plugin.ini on the controller node, the compute node, or both.
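
A quick way to start, assuming the default RDO log location:

grep ERROR /var/log/neutron/openvswitch-agent.log | tail -n 20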

2014-04-25 08:24:09 -0500 answered a question unable to unshare external network

Not at this time.

There's an open bug: https://bugs.launchpad.net/neutron/+b...

And a patch in review: https://review.openstack.org/#/c/82352/

This is targeted for the Juno release and may be backported to Icehouse if deemed important enough.

2014-04-25 08:21:05 -0500 answered a question Instances alternative gateway / VLAN tags corresponding datacenter VLANs

Your instances would need layer 2 connectivity to your physical router. If you're using VLAN tenant networks (from question 2 I assume this is the case), you'd create provider networks with specific VLANs. These VLANs should be trunked by your switches and known to the physical router.

For example: neutron net-create net1 --provider:network_type=vlan --provider:physical_network=physnet1 --provider:segmentation_id=100

  • physnet1 is the name of the physical network configured in the plugin.ini of your controller and L2 agents (see the plugin.ini sketch after this list). Make sure that physnet1 is mapped to a bridge that is connected to the correct physical datacenter network, which the router is also connected to (assuming OVS agents here)
  • segmentation_id of 100 in this case would mean that the new "net1" network would be a VLAN network with a VLAN tag of 100
  • You would then hook up your instances to this network
  • Make sure that VLAN 100 is being trunked by your physical switches
  • In the subnet configuration for "net1" configure your physical router as the gateway
  • In your router, configure a sub-interface for VLAN 100 and assign it the correct IP (or the equivalent configuration if using L3 switches or whatnot)
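
A sketch of the matching plugin.ini, assuming the ML2 plugin with the OVS agent (the bridge name and VLAN range are illustrative):

# On the controller
[ml2_type_vlan]
network_vlan_ranges = physnet1:100:199

# On the nodes running the OVS agent
[ovs]
bridge_mappings = physnet1:br-eth1
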
2014-04-25 07:52:02 -0500 received badge  Necromancer (source)
2014-04-25 07:52:02 -0500 received badge  Teacher (source)
2014-04-25 07:47:07 -0500 received badge  Supporter (source)
2014-04-25 07:47:03 -0500 answered a question Is there a way to add a route to a virtual router?

Yes, via the extraroutes extension: http://docs.openstack.org/api/opensta...

And via the CLI:

neutron router-update [router_name] --routes type=dict list=true destination=[cidr],nexthop=[ip address]

For example:

neutron router-update router1 --routes type=dict list=true destination=40.0.1.0/24,nexthop=10.0.0.3
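
You can then verify that the route was added by checking the routes field of:

neutron router-show router1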

2014-04-25 07:25:42 -0500 received badge  Famous Question (source)
2014-04-25 07:25:42 -0500 received badge  Notable Question (source)
2013-12-19 16:10:29 -0500 received badge  Popular Question (source)
2013-12-19 05:46:25 -0500 asked a question New guide to set up VXLAN tenant networks

There is a new guide available on the RDO website regarding the use of VXLAN tenant networks: