
Emad's profile - activity

2016-07-11 15:09:06 -0600 received badge  Famous Question (source)
2016-03-09 06:13:34 -0600 received badge  Notable Question (source)
2016-02-10 05:23:53 -0600 received badge  Famous Question (source)
2015-12-28 21:14:28 -0600 received badge  Notable Question (source)
2015-12-09 17:47:39 -0600 received badge  Teacher (source)
2015-12-09 17:47:39 -0600 received badge  Self-Learner (source)
2015-12-09 08:48:14 -0600 received badge  Popular Question (source)
2015-12-09 03:39:12 -0600 commented question Icehouse to create external net with ODL

Did you manage to solve the issue? I have the same problem.

2015-12-08 11:05:36 -0600 asked a question net-create mechanism driver error

I just integrated OpenStack with OpenDaylight; however, after running neutron net-create ext-net I get a mechanism driver error.

controller@controller-std:~/networking-odl$ neutron net-create ext-net
500-{u'NeutronError': {u'message': u'create_network_postcommit failed.', u'type': u'MechanismDriverError', u'detail': u''}}

Do I need to install the networking_odl package? I remember needing it for devstack. If the package is required, then for stable/icehouse I get the following error when running sudo python setup.py install:

controller@controller-std:~/networking-odl$ sudo python setup.py install
ERROR:root:Error parsing
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/pbr/core.py", line 109, in pbr
    attrs = util.cfg_to_args(path)
  File "/usr/local/lib/python2.7/dist-packages/pbr/util.py", line 230, in cfg_to_args
    hook_fn = resolve_name(hook)
  File "/usr/local/lib/python2.7/dist-packages/pbr/util.py", line 185, in resolve_name
    raise ImportError(name)
ImportError: neutron.hooks.setup_hook
error in setup command: Error parsing /home/controller/networking-odl/setup.cfg: ImportError: neutron.hooks.setup_hook
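
For completeness, this is roughly how I fetched the source before installing (the stable/icehouse branch name is my guess; if networking-odl has no branch matching Icehouse, that mismatch could explain the neutron.hooks.setup_hook import error):

git clone https://git.openstack.org/openstack/networking-odl
cd networking-odl
git checkout stable/icehouse    # guess: this branch may not exist for Icehouse
sudo python setup.py install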

And here is neutron log:

2015-12-09 15:46:08.410 1188 INFO neutron.common.config [-] Logging enabled!
2015-12-09 15:46:08.413 1188 INFO neutron.common.config [-] Config paste file: /etc/neutron/api-paste.ini
2015-12-09 15:46:08.455 1188 INFO neutron.manager [-] Loading core plugin: neutron.plugins.ml2.plugin.Ml2Plugin
2015-12-09 15:46:08.575 1188 INFO neutron.plugins.ml2.managers [-] Configured type driver names: ['flat', 'gre']
2015-12-09 15:46:08.634 1188 INFO neutron.plugins.ml2.drivers.type_flat [-] Allowable flat physical_network names: []
2015-12-09 15:46:08.636 1188 INFO neutron.plugins.ml2.managers [-] Loaded type driver names: ['flat', 'gre']
2015-12-09 15:46:08.636 1188 INFO neutron.plugins.ml2.managers [-] Registered types: ['flat', 'gre']
2015-12-09 15:46:08.636 1188 INFO neutron.plugins.ml2.managers [-] Tenant network_types: ['gre']
2015-12-09 15:46:08.637 1188 INFO neutron.plugins.ml2.managers [-] Configured mechanism driver names: ['opendaylight']
2015-12-09 15:46:08.637 1188 ERROR stevedore.extension [-] Could not load 'opendaylight': No module named opendaylight.driver
2015-12-09 15:46:08.637 1188 ERROR stevedore.extension [-] No module named opendaylight.driver
2015-12-09 15:46:08.637 1188 TRACE stevedore.extension Traceback (most recent call last):
2015-12-09 15:46:08.637 1188 TRACE stevedore.extension   File "/usr/lib/python2.7/dist-packages/stevedore/extension.py", line 162, in _load_plugins
2015-12-09 15:46:08.637 1188 TRACE stevedore.extension     verify_requirements,
2015-12-09 15:46:08.637 1188 TRACE stevedore.extension   File "/usr/lib/python2.7/dist-packages/stevedore/named.py", line 123, in _load_one_plugin
2015-12-09 15:46:08.637 1188 TRACE stevedore.extension     verify_requirements,
2015-12-09 15:46:08.637 1188 TRACE stevedore.extension   File "/usr/lib/python2.7/dist-packages/stevedore/extension.py", line 177, in _load_one_plugin
2015-12-09 15:46:08.637 1188 TRACE stevedore.extension     plugin = ep.load(require=verify_requirements)
2015-12-09 15:46:08.637 1188 TRACE stevedore.extension   File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 2088, in load
2015-12-09 15:46:08.637 1188 TRACE stevedore.extension     entry = __import__(self.module_name, globals(),globals(), ['__name__'])
2015-12-09 15:46:08.637 1188 TRACE stevedore.extension ImportError: No ...
(more)
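
For reference, the relevant part of my /etc/neutron/plugins/ml2/ml2_conf.ini looks roughly like this (the ODL URL and credentials below are placeholders, not my real values):

[ml2]
type_drivers = flat,gre
tenant_network_types = gre
mechanism_drivers = opendaylight

[ml2_odl]
# placeholders: point these at the actual OpenDaylight controller
url = http://ODL_IP:8080/controller/nb/v2/neutron
username = admin
password = admin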
2015-12-07 09:56:29 -0600 received badge  Student (source)
2015-12-07 05:45:08 -0600 answered a question docker ps -a does not show anything using nova-docker

Solved the problem: in Icehouse, instead of modifying /etc/nova/nova.conf, you should modify /etc/nova/nova-compute.conf to change the compute_driver, and comment out:

# [libvirt]
# virt_type=kvm
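
So /etc/nova/nova-compute.conf on the compute node ends up looking roughly like this (a minimal sketch of the relevant part):

[DEFAULT]
compute_driver = novadocker.virt.docker.DockerDriver

# [libvirt]
# virt_type=kvm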
2015-12-07 03:26:48 -0600 asked a question docker ps -a does not show anything using nova-docker

Hi,

I just installed nova-docker on my compute node. However, after booting a Docker container, docker ps -a does not show anything, and of course the container is not pingable, even though nova list shows the container as running (a verification sketch follows at the end of this post). Here is my nova.conf on the compute node:

[DEFAULT]
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
iscsi_helper=tgtadm
libvirt_use_virtio_for_bridges=True
connection_type=libvirt
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
verbose=True
ec2_private_dns_show_ip=True
api_paste_config=/etc/nova/api-paste.ini
volumes_path=/var/lib/nova/volumes
enabled_apis=ec2,osapi_compute,metadata

compute_driver = novadocker.virt.docker.DockerDriver

network_api_class = nova.network.neutronv2.api.API
neutron_url = http://controller-std:9696
neutron_auth_strategy = keystone
neutron_admin_tenant_name = service
neutron_admin_username = neutron
neutron_admin_password = NEUTRON_PASS
neutron_admin_auth_url = http://controller-std:35357/v2.0
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
security_group_api = neutron

auth_strategy = keystone

rpc_backend = rabbit
rabbit_host = controller-std
rabbit_password = RABBIT_PASS

my_ip = 10.0.0.31
vnc_enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 10.0.0.31
novncproxy_base_url = http://controller-std:6080/vnc_auto.html

glance_host = controller-std

[database]
# The SQLAlchemy connection string used to connect to the database
connection = mysql://nova:NOVA_DBPASS@controller-std/nova

[keystone_authtoken]
auth_uri = http://controller-std:5000
auth_host = controller-std
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = NOVA_PASS

And this is nova list on controller node:

controller@controller-std:~$ nova list
+--------------------------------------+----------------+---------+------------+-------------+-------------------------------------+
| ID                                   | Name           | Status  | Task State | Power State | Networks                            |
+--------------------------------------+----------------+---------+------------+-------------+-------------------------------------+
| ce6fbd83-953f-427d-a794-750392d8a8e4 | demo-docker    | ACTIVE  | -          | Running     | demo-net=192.168.1.9                |
| 97f85631-cf9e-476d-bb1b-63fbe1e9de02 | demo-instance1 | SHUTOFF | -          | Shutdown    | demo-net=192.168.1.2, 203.0.113.102 |
| 20cc943f-07be-462b-af1b-fa358a47bf9e | demo-instance2 | SHUTOFF | -          | Shutdown    | demo-net=192.168.1.4, 203.0.113.103 |
| 15bfcc13-a85c-4906-9db9-5ae50df99d33 | demo-instance3 | SHUTOFF | -          | Shutdown    | demo-net=192.168.1.5, 203.0.113.104 |
+--------------------------------------+----------------+---------+------------+-------------+-------------------------------------+

P.S.: I am using OpenStack Icehouse, and I am not using Heat.
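
In case it helps, here is how I can check whether the Docker driver actually loaded (a sketch; <ID> is a placeholder taken from the first command's output):

nova hypervisor-list
nova hypervisor-show <ID> | grep hypervisor_type    # should say 'docker' if the driver loaded
grep -i docker /var/log/nova/nova-compute.log | tail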

2015-12-05 12:43:22 -0600 commented answer cannot ping 203.0.113.101 icehouse

Yeah, I can do that; I live in the country to your left :D

2015-12-05 12:33:50 -0600 commented answer cannot ping 203.0.113.101 icehouse

Okay, thank you so much; without people like you I don't know how I would survive. I really appreciate it.

Cheers

2015-12-05 12:28:19 -0600 received badge  Supporter (source)
2015-12-05 12:25:47 -0600 commented answer cannot ping 203.0.113.101 icehouse

And I can't ping the instances' 192.168.1.X addresses from the network node.

2015-12-05 12:20:36 -0600 commented answer cannot ping 203.0.113.101 icehouse

Perhaps I should give br-ex an address higher than .107, because I guess if I create more instances and the floating IP allocation reaches .107 again, something will go wrong, won't it?

2015-12-05 12:20:28 -0600 commented answer cannot ping 203.0.113.101 icehouse

Okay, now I am able to ping 203.0.113.X without going through the qrouter namespace. Is qg-xxxxxx the external subnet interface?!
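
By qg-xxxxxx I mean the interface I see inside the router namespace, roughly like this (the UUID is a placeholder for whatever ip netns list shows):

sudo ip netns list
sudo ip netns exec qrouter-<uuid> ip addr show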

2015-12-05 12:09:13 -0600 commented answer cannot ping 203.0.113.101 icehouse

You mean I should change the br-ex IP in /etc/network/interfaces?

2015-12-05 11:41:41 -0600 commented answer cannot ping 203.0.113.101 icehouse

Through the qrouter namespace everything is fine, and through an instance everything is also fine; however, from the network node I cannot ping the floating IPs, and I don't know why.

2015-12-05 11:40:28 -0600 commented answer cannot ping 203.0.113.101 icehouse

If I ping through the qrouter namespace on the network node and run tcpdump on the compute node:

19:37:36.144199 IP 203.0.113.102 > 192.168.1.2: ICMP echo request, id 4984, seq 1, length 64
19:37:36.145220 IP 192.168.1.2 > 203.0.113.102: ICMP echo reply, id 4984, seq 1, length 64
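
By "ping through qrouter" I mean running ping from inside the router namespace on the network node, roughly (the UUID is a placeholder):

sudo ip netns exec qrouter-<uuid> ping -c 1 192.168.1.2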
2015-12-05 11:13:34 -0600 marked best answer cannot ping 203.0.113.101 icehouse

Hi

I followed the OpenStack Icehouse installation guide (http://docs.openstack.org/icehouse/install-guide/install/apt/openstack-install-guide-apt-icehouse.pdf); however, I am not able to ping 203.0.113.101.

This is my Controller ifconfig:

eth0      Link encap:Ethernet  HWaddr 52:54:00:2d:d9:90  
          inet addr:10.0.0.11  Bcast:10.0.0.255  Mask:255.255.255.0
          inet6 addr: fe80::5054:ff:fe2d:d990/64 Scope:Link
          UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
          RX packets:15201 errors:0 dropped:1034 overruns:0 frame:0
          TX packets:10282 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:17043137 (17.0 MB)  TX bytes:1660709 (1.6 MB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:38658 errors:0 dropped:0 overruns:0 frame:0
          TX packets:38658 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:8819118 (8.8 MB)  TX bytes:8819118 (8.8 MB)

Controller /etc/network/interfaces:

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
up ifconfig eth0 promisc
address 10.0.0.11
netmask 255.255.255.0
network 10.0.0.0
gateway 10.0.0.1
broadcast 10.0.0.255
dns-nameservers 8.8.8.8

Network node ifconfig:

br-ex     Link encap:Ethernet  HWaddr d2:a0:4e:2d:c5:4d  
          inet6 addr: fe80::f843:8aff:fe2f:bd6/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:14 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1156 (1.1 KB)  TX bytes:648 (648.0 B)

br-int    Link encap:Ethernet  HWaddr b6:fa:c8:a4:32:48  
          inet6 addr: fe80::8460:15ff:fe98:f0eb/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:21 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1722 (1.7 KB)  TX bytes:648 (648.0 B)

br-tun    Link encap:Ethernet  HWaddr d2:db:b4:d3:46:4e  
          inet6 addr: fe80::68ca:15ff:fe84:8394/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:648 (648.0 B)

eth0      Link encap:Ethernet  HWaddr 52:54:00:3c:6e:b2  
          inet addr:10.0.0.21  Bcast:10.0.0.255  Mask:255.255.255.0
          inet6 addr: fe80::5054:ff:fe3c:6eb2/64 Scope:Link
          UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
          RX packets:5076 errors:0 dropped:0 overruns:0 frame:0
          TX packets:4343 errors:0 dropped:0 ...
(more)
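
One thing I notice is that br-ex above has no inet address, which may be related. A quick connectivity check I can run from the network node (a sketch, not from the guide):

ip addr show br-ex           # expect an address on the 203.0.113.0/24 external range
ip route | grep 203.0.113    # expect a route via br-ex
ping -c 3 203.0.113.101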