awasi's profile - activity

2014-04-28 18:15:39 -0500 received badge  Famous Question (source)
2014-04-26 14:19:08 -0500 received badge  Scholar (source)
2014-04-26 08:46:02 -0500 answered a question dhcp worked but cannot ping or ssh to instance

Yup, that did the trick. I completely forgot about the security group.
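For anyone who lands here with the same symptom, here is a minimal sketch of the kind of security group rules that are needed (this assumes the instance uses the tenant's "default" group and that allowing traffic from anywhere is acceptable; the exact group name and CIDR depend on your setup):

# allow ping (ICMP) and ssh (TCP/22) into instances in the default group
neutron security-group-rule-create --direction ingress --protocol icmp default
neutron security-group-rule-create --direction ingress --protocol tcp --port-range-min 22 --port-range-max 22 default

With rules like these in place, ping and ssh to the floating IP work as expected.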

2014-04-26 08:09:15 -0500 received badge  Notable Question (source)
2014-04-26 04:09:55 -0500 received badge  Popular Question (source)
2014-04-26 03:00:09 -0500 received badge  Student (source)
2014-04-26 01:33:38 -0500 received badge  Editor (source)
2014-04-26 00:50:45 -0500 asked a question dhcp worked but cannot ping or ssh to instance

Hi Folks,

Just recently got Icehouse up and running with the ML2 plugin. I have a 3-node cluster (server, compute, and network nodes) and am running ML2 with VLANs, with multiple NICs in all three nodes. The problem I am running into is that when a VM is up and running on the compute node, the VM gets its IP address from DHCP on the network node and boots fine, and I can assign the VM a floating IP. The problem is that I am not able to reach the VM: for example, the floating IP assigned to the VM is 192.168.1.71, but I can neither ping the VM at that floating IP nor ssh to it. I confirmed with tcpdump that packets are indeed reaching the compute node via the network node; I can see the pings arriving at the compute node for the VM's IP address, but I don't see any response coming back.

I am kind of perplexed that DHCP works fine but after that the VM goes quiet. Below is my ML2 configuration along with the ifconfig output.

This looks like a bug in Icehouse compute with ML2 and VLANs.

Thanks in advance for any help

Atif….

root@compute1:/etc/neutron/plugins/ml2# more ml2_conf.ini
[ml2]
# (ListOpt) List of network type driver entrypoints to be loaded from
# the neutron.ml2.type_drivers namespace.
#
type_drivers = vlan

# (ListOpt) Ordered list of networking mechanism driver entrypoints
# to be loaded from the neutron.ml2.mechanism_drivers namespace.
# mechanism_drivers =
mechanism_drivers = openvswitch

[ml2_type_vlan]
# (ListOpt) List of <physical_network>[:<vlan_min>:<vlan_max>] tuples
# specifying physical_network names usable for VLAN provider and
# tenant networks, as well as ranges of VLAN tags on each
# physical_network available for allocation as tenant networks.
#
network_vlan_ranges = default:1000:2000

[securitygroup]
# Controls if neutron security group is enabled or not.
# It should be false when you use nova security group.
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True

[ovs]
bridge_mappings = default:br-eth1

root@compute1:/etc/neutron/plugins/ml2#
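(Note on the [ovs] section above: bridge_mappings = default:br-eth1 assumes br-eth1 already exists on the compute node with the physical VLAN trunk NIC attached to it. A minimal sketch of creating it, assuming eth1 is the trunk interface on my hosts:

# create the provider bridge and attach the physical NIC, then verify
ovs-vsctl add-br br-eth1
ovs-vsctl add-port br-eth1 eth1
ovs-vsctl show

The network node needs an equivalent bridge and mapping for the same physical network.)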

root@compute1:~# ifconfig -a
br-eth1   Link encap:Ethernet  HWaddr f0:1f:af:e8:c0:0c  
          inet6 addr: fe80::24b7:12ff:fe17:759d/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:36 errors:0 dropped:0 overruns:0 frame:0
          TX packets:65 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:3688 (3.6 KB)  TX bytes:14898 (14.8 KB)

br-int    Link encap:Ethernet  HWaddr 3e:79:be:0d:5b:43  
          inet6 addr: fe80::2c29:27ff:fe44:edd4/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:48 errors:0 dropped:0 overruns:0 frame:0
          TX packets:65 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:4672 (4.6 KB)  TX bytes:14841 (14.8 KB)

eth0      Link encap:Ethernet  HWaddr f0:1f:af:e8:c0:0b  
          inet addr:192.168.1.61  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr ...