
eupub's profile - activity

2019-11-30 07:26:29 -0500 received badge  Famous Question (source)
2018-10-30 11:35:51 -0500 received badge  Famous Question (source)
2018-10-12 18:26:05 -0500 received badge  Notable Question (source)
2018-10-11 11:30:22 -0500 received badge  Popular Question (source)
2018-10-09 21:12:47 -0500 asked a question ubuntu 18.04 rocky nova-compute cannot install unmet dependencies

Hi Experts, I am following the install guide to install Rocky nova-compute (on Ubuntu 18.04), but apt returns an unmet-dependencies error. Can anyone help me fix this?

Thanks!

Regards, Boon Lee

apt install nova-compute

Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming. The following information may help
to resolve the situation:

The following packages have unmet dependencies:
 nova-compute : Depends: python-nova (= 2:18.0.1-0ubuntu1~cloud0) but it is not going to be installed or
                python3-nova (= 2:18.0.1-0ubuntu1~cloud0) but it is not going to be installed
E: Unable to correct problems, you have held broken packages.

lsb_release -a

No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.1 LTS
Release:        18.04
Codename:       bionic

apt-get update

Hit:1 http://archive.ubuntu.com/ubuntu bionic InRelease
Get:2 http://archive.ubuntu.com/ubuntu bionic-security InRelease [83.2 kB]
Ign:3 http://ubuntu-cloud.archive.canonical... bionic-updates/rocky InRelease
Get:4 http://ubuntu-cloud.archive.canonical... bionic-updates/rocky Release [7,879 B]
Get:5 http://ubuntu-cloud.archive.canonical... bionic-updates/rocky Release.gpg [543 B]
Get:6 http://ubuntu-cloud.archive.canonical... bionic-updates/rocky/main amd64 Packages [113 kB]
Get:7 http://archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]
Get:8 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages [399 kB]
Get:9 http://archive.ubuntu.com/ubuntu bionic-updates/main Translation-en [149 kB]
Fetched 841 kB in 57s (14.8 kB/s)
Reading package lists... Done

apt-get dist-upgrade

Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
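When apt says a dependency "is not going to be installed", trying to install that dependency directly usually reveals the real blocker. A diagnostic sketch (package names taken from the error above; the actual conflict apt reports will vary):

```shell
# Show which candidate versions apt sees for the blocked packages
apt-cache policy nova-compute python3-nova

# Installing the dependency directly makes apt print the package
# that actually conflicts or is held back
sudo apt install python3-nova

# List any packages that were put on hold ("held broken packages")
apt-mark showhold
```

If python3-nova pulls in a conflicting library, the usual cause is a mix of bionic and cloud-archive pockets; making sure all Rocky packages come from the same cloud-archive repository tends to resolve it.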

2016-08-21 20:35:20 -0500 received badge  Notable Question (source)
2016-07-29 15:18:07 -0500 received badge  Popular Question (source)
2016-06-20 01:41:16 -0500 asked a question mitaka libreswan change ipsec.secrets file not allowed

Hi Experts,

I am running Mitaka on CentOS 7 and trying to set up VPNaaS (using libreswan), and I keep getting an error that the chown operation on the ipsec.secrets file is not permitted. It seems my changes to rootwrap/vpnaas.filters and libreswan_ipsec.py are not taking effect?

Please advise, thanks!

Regards, Boon Lee

ls -l /var/lib/neutron/ipsec/9aea0cff-e830-4a20-b3f0-33acc462fd6f/etc/

total 12

-rw-r--r--. 1 neutron neutron 1898 Jun 20 14:23 ipsec.conf

drwxr-xr-x. 11 neutron neutron 4096 Jun 20 14:18 ipsec.d

-rw-------. 1 neutron neutron 82 Jun 20 14:23 ipsec.secrets

drwxr-xr-x. 3 neutron neutron 18 Jun 20 14:18 pki

vpn-agent.log:

2016-06-20 14:23:40.502 13103 ERROR neutron.agent.linux.utils [-] Exit code: 1; Stdin: ; Stdout: ; Stderr: chown: changing ownership of ‘/var/lib/neutron/ipsec/9aea0cff-e830-4a20-b3f0-33acc462fd6f/etc/ipsec.secrets’: Operation not permitted

2016-06-20 14:23:40.503 13103 ERROR neutron_vpnaas.services.vpn.device_drivers.ipsec [-] Failed to enable vpn process on router 9aea0cff-e830-4a20-b3f0-33acc462fd6f
2016-06-20 14:23:40.503 13103 ERROR neutron_vpnaas.services.vpn.device_drivers.ipsec Traceback (most recent call last):
2016-06-20 14:23:40.503 13103 ERROR neutron_vpnaas.services.vpn.device_drivers.ipsec   File "/usr/lib/python2.7/site-packages/neutron_vpnaas/services/vpn/device_drivers/ipsec.py", line 289, in enable
2016-06-20 14:23:40.503 13103 ERROR neutron_vpnaas.services.vpn.device_drivers.ipsec     self.ensure_configs()
2016-06-20 14:23:40.503 13103 ERROR neutron_vpnaas.services.vpn.device_drivers.ipsec   File "/usr/lib/python2.7/site-packages/neutron_vpnaas/services/vpn/device_drivers/libreswan_ipsec.py", line 51, in ensure_configs
2016-06-20 14:23:40.503 13103 ERROR neutron_vpnaas.services.vpn.device_drivers.ipsec     secrets_file])
2016-06-20 14:23:40.503 13103 ERROR neutron_vpnaas.services.vpn.device_drivers.ipsec   File "/usr/lib/python2.7/site-packages/neutron_vpnaas/services/vpn/device_drivers/ipsec.py", line 396, in _execute
2016-06-20 14:23:40.503 13103 ERROR neutron_vpnaas.services.vpn.device_drivers.ipsec     extra_ok_codes=extra_ok_codes)
2016-06-20 14:23:40.503 13103 ERROR neutron_vpnaas.services.vpn.device_drivers.ipsec   File "/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 927, in execute
2016-06-20 14:23:40.503 13103 ERROR neutron_vpnaas.services.vpn.device_drivers.ipsec     log_fail_as_error=log_fail_as_error, **kwargs)
2016-06-20 14:23:40.503 13103 ERROR neutron_vpnaas.services.vpn.device_drivers.ipsec   File "/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py", line 140, in execute
2016-06-20 14:23:40.503 13103 ERROR neutron_vpnaas.services.vpn.device_drivers.ipsec     raise RuntimeError(msg)
2016-06-20 14:23:40.503 13103 ERROR neutron_vpnaas.services.vpn.device_drivers.ipsec RuntimeError: Exit code: 1; Stdin: ; Stdout: ; Stderr: chown: changing ownership of ‘/var/lib/neutron/ipsec/9aea0cff-e830-4a20-b3f0-33acc462fd6f/etc/ipsec.secrets’: Operation not permitted
2016-06-20 14:23:40.503 13103 ERROR neutron_vpnaas.services.vpn.device_drivers.ipsec
2016-06-20 14:23:40.503 13103 ERROR neutron_vpnaas.services.vpn.device_drivers.ipsec

cat /usr/share/neutron/rootwrap/vpnaas.filters

[Filters]

ip: IpFilter, ip, root

ip_exec: IpNetnsExecFilter, ip, root

ipsec: CommandFilter, ipsec, root

strongswan: CommandFilter, strongswan, root

neutron_netns_wrapper: CommandFilter, neutron-vpn-netns-wrapper, root

neutron_netns_wrapper_local: CommandFilter, /usr/local/bin/neutron-vpn-netns-wrapper, root

chown: RegExpFilter, chown, root, chown, --from=., root.root, ./ipsec.secrets

rpm -qa |grep libreswan

libreswan-3.15-5.el7_1.x86_64

rpm -qa |grep neutron

openstack-neutron-openvswitch-8.1 ... (more)
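Since the chown is already executed as root via rootwrap, "Operation not permitted" on CentOS 7 can also come from SELinux rather than from the rootwrap filter itself. A quick check, assuming the standard SELinux tooling is installed:

```shell
getenforce                        # Enforcing / Permissive / Disabled
sudo ausearch -m avc -ts recent   # any recent SELinux denials?

# As a temporary experiment only (not a fix), retry the VPN setup
# with SELinux permissive to confirm or rule it out:
#   sudo setenforce 0
```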

2016-03-01 11:31:20 -0500 received badge  Famous Question (source)
2015-09-25 20:58:47 -0500 received badge  Popular Question (source)
2015-09-25 20:58:47 -0500 received badge  Notable Question (source)
2015-09-13 08:57:11 -0500 asked a question How can I install HEAT software config agents/scripts manually on my image

Hi Experts,

I am trying out Heat autoscaling using Software Configuration, which requires my image to have the agents and hook scripts installed. I understand that diskimage-builder is the recommended way to do this, but it doesn't seem to work in my case: my image ships with an app that I have no control over, and diskimage-builder doesn't seem to keep that application intact after it runs (maybe I am doing something wrong). So, how can I install the Heat software-config hook scripts (heat-config and heat-config-script, normally installed by diskimage-builder) manually on my image? I have already managed to install the os-apply-config, os-collect-config and os-refresh-config packages manually.

I'd appreciate any advice, thanks!

Regards, Boon Lee
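One possible manual route, assuming the hook scripts still live in the openstack/heat-templates repository (paths may differ between releases):

```shell
# The software-config elements carry the heat-config agent and the
# per-tool hooks (heat-config-script, etc.) as install.d scripts
git clone https://opendev.org/openstack/heat-templates
cd heat-templates/hot/software-config/elements
ls heat-config/install.d heat-config-script/install.d

# Running those install.d scripts inside the image (e.g. via chroot)
# is roughly what diskimage-builder would otherwise do for you
```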

2015-09-11 12:29:13 -0500 received badge  Famous Question (source)
2015-09-10 13:23:02 -0500 received badge  Notable Question (source)
2015-09-10 08:03:02 -0500 received badge  Popular Question (source)
2015-09-10 02:27:38 -0500 asked a question How to gracefully terminate instance in Openstack?

Hi Experts,

Is it possible to gracefully terminate instances in OpenStack (Juno)? I need the instance to shut down gracefully at the OS level (to run some shutdown scripts) before it is finally deleted. Currently the instance is deleted abruptly. Are there ways to work around this?

Please advise!

Regards, Boon Lee
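One hedged workaround: stop the instance before deleting it. With the libvirt driver, `nova stop` sends an ACPI shutdown request to the guest and, on recent releases, waits up to a configurable `shutdown_timeout` before powering off, so in-guest shutdown scripts get a chance to run:

```shell
# Ask the guest OS to shut down cleanly, then delete the instance.
# INSTANCE_ID is a placeholder for your instance's UUID or name.
nova stop INSTANCE_ID
nova delete INSTANCE_ID
```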

2014-12-22 05:58:55 -0500 received badge  Popular Question (source)
2014-12-22 05:58:55 -0500 received badge  Famous Question (source)
2014-12-22 05:58:55 -0500 received badge  Notable Question (source)
2014-10-28 17:43:52 -0500 received badge  Famous Question (source)
2014-10-20 08:33:14 -0500 asked a question neutron-openvswitch port - how to disable source IP address checking?

Hi there,

I have implemented Icehouse using Neutron and Open vSwitch with ML2. I have deployed a VM acting as a VPN client (communicating with a VPN server outside OpenStack) to route traffic to/from the VPN server for other subnets outside of OpenStack, but it seems Neutron/Open vSwitch doesn't allow packets with source IP addresses other than the VM's own to be routed out of the VM's port. Is there a way to disable this behavior?

It seems source IP address is tied to MAC address:

service iptables status

Chain neutron-openvswi-s1eaac794-3 (1 references)

num target prot opt source destination

1 RETURN all -- 172.16.0.48 0.0.0.0/0 MAC FA:16:3E:A0:24:7F

2 DROP all -- 0.0.0.0/0 0.0.0.0/0

There are many drop packets because the source IP address was not 172.16.0.48:

iptables -L -n -v

Chain neutron-openvswi-s1eaac794-3 (1 references)

pkts bytes target prot opt in out source destination

0     0 RETURN     all  --  *      *       172.16.0.48          0.0.0.0/0           MAC FA:16:3E:A0:24:7F

20507 1723K DROP all -- * * 0.0.0.0/0 0.0.0.0/0

I tried adding allow-all rules to the security groups, but that doesn't seem to help at all. I would like the VM (the VPN client) to be able to route traffic with different source IP addresses out to other VMs on the same subnet (and security group).

I'd appreciate any input, thanks!

Regards, Boon Lee
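If the ML2 driver in use supports the allowed-address-pairs extension, widening the port's permitted source addresses may be cleaner than fighting the security groups. A sketch (PORT_UUID is a placeholder; some backends reject 0.0.0.0/0, in which case list the specific subnets instead):

```shell
# The anti-spoofing rule pins the port to its fixed IP/MAC;
# allowed-address-pairs adds extra source addresses the port may use.
neutron port-update PORT_UUID \
    --allowed-address-pairs type=dict list=true ip_address=0.0.0.0/0
```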

2014-10-13 03:03:24 -0500 commented answer openstack-nova-compute - ERROR nova.openstack.common.threadgroup [-] Unexpected vif_type=binding_failed

I am seeing this in openvswitch-agent.log - ERROR neutron.plugins.openvswitch.agent.ovs_neutron_agent [req-ed952ee6-11fc-4312-a6e0-5598953729db None] Cannot provision gre network for net-id=ad97603d-27a5-43dc-872b-b3fc0a4e55df - tunneling disabled

2014-10-12 20:11:38 -0500 received badge  Notable Question (source)
2014-10-10 00:52:23 -0500 commented answer openstack-nova-compute - ERROR nova.openstack.common.threadgroup [-] Unexpected vif_type=binding_failed

ovs-vsctl shows the ports already under br-int, so how come I am getting these port-add failures in ovs-vswitchd.log? These instances were deployed before my upgrade...

2014-10-10 00:50:27 -0500 commented answer openstack-nova-compute - ERROR nova.openstack.common.threadgroup [-] Unexpected vif_type=binding_failed

neutron agent-list shows the agents are running. I have restarted all services multiple times, but openstack-nova-compute simply refuses to start up. I did notice a bunch of failed-to-add-port messages in ovs-vswitchd.log: WARN|system@ovs-system: failed to add qvo1a5fea4a-6e as port: No such device

2014-10-10 00:45:58 -0500 received badge  Popular Question (source)
2014-10-10 00:45:45 -0500 received badge  Enthusiast
2014-10-09 10:01:52 -0500 asked a question openstack-nova-compute - ERROR nova.openstack.common.threadgroup [-] Unexpected vif_type=binding_failed

Hi there,

After upgrading (stop all services, yum upgrade, db sync) from an older Icehouse build to the latest Icehouse build, my compute node (specifically openstack-nova-compute) cannot be started. All other services on the controller and network nodes seem to be running fine, and before the upgrade everything was working.

Appreciate any help please!

# rpm -qa | grep openstack-nova
openstack-nova-compute-2014.1.2-1.el6.noarch
openstack-nova-common-2014.1.2-1.el6.noarch

From compute.log:

2014-10-09 21:21:59.773 11811 ERROR nova.openstack.common.threadgroup [-] Unexpected vif_type=binding_failed
2014-10-09 21:21:59.773 11811 TRACE nova.openstack.common.threadgroup Traceback (most recent call last):
2014-10-09 21:21:59.773 11811 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.6/site-packages/nova/openstack/common/threadgroup.py", line 117, in wait
2014-10-09 21:21:59.773 11811 TRACE nova.openstack.common.threadgroup     x.wait()
2014-10-09 21:21:59.773 11811 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.6/site-packages/nova/openstack/common/threadgroup.py", line 49, in wait
2014-10-09 21:21:59.773 11811 TRACE nova.openstack.common.threadgroup     return self.thread.wait()
2014-10-09 21:21:59.773 11811 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.6/site-packages/eventlet/greenthread.py", line 173, in wait
2014-10-09 21:21:59.773 11811 TRACE nova.openstack.common.threadgroup     return self._exit_event.wait()
2014-10-09 21:21:59.773 11811 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.6/site-packages/eventlet/event.py", line 121, in wait
2014-10-09 21:21:59.773 11811 TRACE nova.openstack.common.threadgroup     return hubs.get_hub().switch()
2014-10-09 21:21:59.773 11811 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.6/site-packages/eventlet/hubs/hub.py", line 293, in switch
2014-10-09 21:21:59.773 11811 TRACE nova.openstack.common.threadgroup     return self.greenlet.switch()
2014-10-09 21:21:59.773 11811 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.6/site-packages/eventlet/greenthread.py", line 212, in main
2014-10-09 21:21:59.773 11811 TRACE nova.openstack.common.threadgroup     result = function(*args, **kwargs)
2014-10-09 21:21:59.773 11811 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.6/site-packages/nova/openstack/common/service.py", line 486, in run_service
2014-10-09 21:21:59.773 11811 TRACE nova.openstack.common.threadgroup     service.start()
2014-10-09 21:21:59.773 11811 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.6/site-packages/nova/service.py", line 163, in start
2014-10-09 21:21:59.773 11811 TRACE nova.openstack.common.threadgroup     self.manager.init_host()
2014-10-09 21:21:59.773 11811 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1044, in init_host
2014-10-09 21:21:59.773 11811 TRACE nova.openstack.common.threadgroup     self._init_instance(context, instance)
2014-10-09 21:21:59.773 11811 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 902, in _init_instance
2014-10-09 21:21:59.773 11811 TRACE nova.openstack.common.threadgroup     self.driver.plug_vifs(instance, net_info)
2014-10-09 21:21:59.773 11811 TRACE ...
(more)
2014-10-07 03:11:16 -0500 answered a question openstack glance db sync issue

I had this issue too and this is how I resolved it -

This has something to do with the PyCrypto module; there is an explanation here: http://wiki.buici.com/xwiki/bin/view/...

To resolve this issue, you have to re-compile PyCrypto from source:

1. Go to the PyCrypto source directory
2. export ac_cv_func_malloc_0_nonnull=yes
3. ./configure
4. python setup.py build
5. python setup.py install
6. Run db sync again.
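The same steps as one sequence, to be run from inside the PyCrypto source directory:

```shell
# Work around the cross-compile malloc(0) autoconf check
export ac_cv_func_malloc_0_nonnull=yes
./configure
python setup.py build
sudo python setup.py install
# ...then re-run the glance db sync
```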

Regards, Boon Lee

2014-09-29 23:20:26 -0500 received badge  Great Answer (source)
2014-09-03 17:17:29 -0500 received badge  Famous Question (source)
2014-08-11 14:42:58 -0500 received badge  Notable Question (source)
2014-06-25 05:56:36 -0500 received badge  Popular Question (source)
2014-06-06 02:56:02 -0500 answered a question icehouse security group not effective

Please ignore this; security groups work after rebooting.

Thks.

Regards, Boon Lee

2014-06-06 01:12:00 -0500 asked a question icehouse security group not effective

Hi there,

I have installed Icehouse on CentOS with Neutron and the ML2 plugin for networking. Networking is working fine, but I have realized that my security groups and rules are not effective at all.

For example, take HTTPS access to my instance (via its associated IP): no matter what I do, whether I remove all rules from the default security group, create a new security group with no rule allowing 443, or even remove the security group from the instance entirely, I am still able to access the instance via HTTPS. In fact, there is always full access (icmp, ssh, etc.) to the instance via the associated IP!

I'd appreciate any input, thanks!

Regards, Boon Lee

[root@sgpcontroller neutron]# rpm -qa|grep openstack

openstack-nova-cert-2014.1-2.el6.noarch

openstack-selinux-0.1.3-2.el6ost.noarch

openstack-nova-novncproxy-2014.1-2.el6.noarch

python-django-openstack-auth-1.1.5-1.el6.noarch

openstack-cinder-2014.1-2.el6.noarch

openstack-nova-console-2014.1-2.el6.noarch

openstack-neutron-ml2-2014.1-11.el6.noarch

openstack-utils-2014.1-1.el6.noarch

openstack-keystone-2014.1-2.el6.noarch

openstack-glance-2014.1-2.el6.noarch

openstack-nova-common-2014.1-2.el6.noarch

openstack-nova-api-2014.1-2.el6.noarch

openstack-dashboard-2014.1-1.el6.noarch

openstack-nova-conductor-2014.1-2.el6.noarch

openstack-nova-scheduler-2014.1-2.el6.noarch

openstack-neutron-2014.1-11.el6.noarch
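One thing worth checking in a setup like this (a hedged suggestion, not a confirmed diagnosis): with ML2/OVS, security groups are enforced by the agent's firewall driver, and if `firewall_driver` is unset or set to the Noop driver, all traffic passes:

```shell
# Look for the firewall driver configured for the agent;
# exact file locations vary between deployments.
grep -ri firewall_driver /etc/neutron/ 2>/dev/null

# An iptables-based driver such as
#   neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
# is required for security groups to take effect with OVS.
```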

2014-05-22 06:46:33 -0500 received badge  Good Question (source)
2014-04-23 07:09:37 -0500 received badge  Favorite Question (source)
2014-04-20 12:41:16 -0500 received badge  Nice Question (source)
2014-04-18 14:07:38 -0500 received badge  Good Answer (source)
2014-04-17 16:12:19 -0500 received badge  Famous Question (source)
2014-04-15 12:16:14 -0500 received badge  Nice Answer (source)
2014-04-15 04:55:04 -0500 received badge  Self-Learner (source)
2014-04-15 04:55:04 -0500 received badge  Teacher (source)
2014-04-14 15:21:38 -0500 received badge  Student (source)
2014-04-13 19:58:04 -0500 answered a question icehouse - Virtual Interface creation failed

I fixed the problem by updating neutron.conf on the controller to include the neutron-nova interaction settings, which were missing after following the "Open vSwitch (OVS) plug-in: configure controller node" steps:

# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  notify_nova_on_port_status_changes True
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  notify_nova_on_port_data_changes True
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  nova_url http://controller:8774/v2
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  nova_admin_username nova
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  nova_admin_tenant_id SERVICE_TENANT_ID
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  nova_admin_password NOVA_PASS
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  nova_admin_auth_url http://controller:35357/v2.0

After adding these, the problem went away. Thanks for the hint!

Regards, Boon Lee

2014-04-13 19:57:19 -0500 commented answer icehouse - Virtual Interface creation failed

I fixed the problem by updating neutron.conf on the controller to include the neutron-nova interaction settings, which were missing after following the "Open vSwitch (OVS) plug-in: configure controller node" steps:

# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  notify_nova_on_port_status_changes True
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  notify_nova_on_port_data_changes True
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  nova_url http://controller:8774/v2
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  nova_admin_username nova
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  nova_admin_tenant_id SERVICE_TENANT_ID
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  nova_admin_password NOVA_PASS
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  nova_admin_auth_url http://controller:35357/v2.0

After adding these, the problem went away. Thanks for the hint!

Regards, Boon Lee

2014-04-13 04:39:42 -0500 commented answer icehouse - Virtual Interface creation failed

Hi there, can you tell me specifically where in the new neutron.conf?

2014-04-13 04:30:25 -0500 received badge  Famous Question (source)
2014-04-11 00:48:03 -0500 received badge  Notable Question (source)