
vercetty92's profile - activity

2019-08-12 14:29:28 -0500 received badge  Famous Question (source)
2019-08-12 14:29:28 -0500 received badge  Notable Question (source)
2019-04-12 09:59:41 -0500 received badge  Taxonomist
2018-04-24 11:11:38 -0500 asked a question masakari issue with tox -egenconfig

Hi Folks,

I'm trying to install and use masakari, which is an officially supported OpenStack project (https://github.com/openstack/masakari...).

I get the following error at the "tox -egenconfig" step:

Traceback (most recent call last):
File "/usr/bin/tox", line 9, in <module>
load_entry_point('tox==1.4.2', 'console_scripts', 'tox')()
File "/usr/lib/python2.7/site-packages/tox/_cmdline.py", line 24, in main
config = parseconfig(args, 'tox')
File "/usr/lib/python2.7/site-packages/tox/_config.py", line 23, in parseconfig
parseini(config)
File "/usr/lib/python2.7/site-packages/tox/_config.py", line 170, in __init__
config)
File "/usr/lib/python2.7/site-packages/tox/_config.py", line 221, in _makeenvconfig
vc.commands = reader.getargvlist(section, "commands")
File "/usr/lib/python2.7/site-packages/tox/_config.py", line 331, in getargvlist
commandlist.append(self._processcommand(current_command))
File "/usr/lib/python2.7/site-packages/tox/_config.py", line 351, in _processcommand
new_word = re.sub(pat, self._replace_match, word)
File "/usr/lib64/python2.7/re.py", line 151, in sub
return _compile(pattern, flags).sub(repl, string, count)
File "/usr/lib/python2.7/site-packages/tox/_config.py", line 485, in _replace_match
return handler(match)
File "/usr/lib/python2.7/site-packages/tox/_config.py", line 455, in _replace_substitution
"substitution key %r not found" % sub_key)
tox.ConfigError: ConfigError: substitution key '[testenv]commands' not found

I'm using CentOS 7 x64, Python 2.7.5, python-tox-1.4.2-8.

I suspect the issue comes from the python-tox version.

I tried to find a more recent version of python-tox, but the only one I found was 2.3.1-1, and it requires python-virtualenv >= 1.11.2, which is impossible to find for RHEL 7/CentOS 7...
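
A workaround I'm considering (not tried yet) is to install a newer tox from PyPI inside a virtualenv instead of using the distro RPM. A rough sketch, assuming pip is available via EPEL (package names may differ):

# pip and virtualenv from EPEL (package name assumed)
sudo yum install -y epel-release python2-pip
sudo pip install virtualenv
# isolated environment with a recent tox
virtualenv ~/tox-env
. ~/tox-env/bin/activate
pip install "tox>=2.3"
# re-run genconfig from the masakari checkout
tox -egenconfig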

Has anyone successfully configured masakari on CentOS 7/RHEL 7?

Thx

2018-04-18 04:48:42 -0500 asked a question instance unreachable (in/out) network option 1 all in one

Hi Folks,

I have an all-in-one OpenStack (Queens) node (which is controller, network, storage, and compute node at the same time).

I use networking option 1 (provider networks) and followed the documentation on the OpenStack website to set it up.

The thing is that the instances can't be reached from the node, or even from another machine on the same network.

I have read a lot of things, but they always mention a "router", and if I'm correct routers are only needed in networking option 2.

Did I miss something or do something wrong?

Thx a lot in advance

Here is the content of neutron.conf:

[DEFAULT]
auth_strategy = keystone
core_plugin = ml2
service_plugins =
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
dhcp_agents_per_network = 2
l3_ha = true
max_l3_agents_per_router = 2
min_l3_agents_per_router = 2
transport_url = rabbit://openstack:xxxx@10.106.1.101
[database]
connection = mysql+pymysql://neutron:xxxx@10.106.1.101/neutron
[keystone_authtoken]
auth_uri = http://10.106.1.101:5000
auth_url = http://10.106.1.101:35357
memcached_servers = 10.106.1.101:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = xxxx
[nova]
auth_url = http://10.106.1.101:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = xxx
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

The content of /etc/neutron/plugins/ml2/ml2_conf.ini:

[ml2]
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[securitygroup]
enable_ipset = true

Content of /etc/neutron/plugins/ml2/linuxbridge_agent.ini:

[linux_bridge]
physical_interface_mappings = provider:bond0
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
[vxlan]
enable_vxlan = false

Content of /etc/neutron/dhcp_agent.ini:

[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

Content of /etc/neutron/metadata_agent.ini:

[DEFAULT]
nova_metadata_host = 10.106.1.101
metadata_proxy_shared_secret = xxxx

The symlink /etc/neutron/plugin.ini --> /etc/neutron/plugins/ml2/ml2_conf.ini exists.

br_netfilter is enabled, and net.bridge.bridge-nf-call-iptables & net.bridge.bridge-nf-call-ip6tables are set to 1.
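
One thing I have not verified yet is whether the default security group lets ICMP and SSH in; as a minimal sketch (assuming my project's "default" group), the rules would be added like this:

openstack security group rule create --proto icmp default
openstack security group rule create --proto tcp --dst-port 22 default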

ip a:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens161: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000
link/ether 00:50:56:8c:d6:63 brd ff:ff:ff:ff:ff:ff
3: ens193: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000
link/ether 00:50:56:8c:d6:63 brd ff:ff:ff:ff:ff:ff
4: ens224: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1 state UP qlen 1000
link/ether 00:50:56:8c:bc:eb brd ff:ff:ff:ff:ff:ff
5: ens256: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1 state UP qlen 1000
link/ether 00:50:56:8c:bc:eb brd ff:ff:ff:ff:ff:ff
6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master brq8a70bed4-8a ...

2018-04-11 03:28:42 -0500 received badge  Famous Question (source)
2018-04-06 08:10:35 -0500 commented answer neutron error on second controller : Failed to bind port when create instance

Hi, I don't have any L3 service running or any L3 log. Isn't the l3_agent only present when using networking option 2? thx

2018-04-06 04:23:47 -0500 received badge  Self-Learner (source)
2018-04-06 04:23:47 -0500 received badge  Teacher (source)
2018-04-06 03:41:49 -0500 answered a question multi nodes nova vnc setup (error code 1006)

OK, so I have recreated the cluster from scratch and it's now working. Here is the setup on each node:

[vnc]
enabled=true
server_listen=0.0.0.0
server_proxyclient_address=$my_ip
novncproxy_base_url=http://10.106.1.101:6080/vnc_auto.html (this is the HA vip)

And indeed the openstack-nova-consoleauth service needs to run on only one controller, so I created a pacemaker group that contains the HA VIP and a consoleauth systemd resource.
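
Roughly, the pacemaker side looks like this (resource and group names are just examples; adapt the netmask to your network):

pcs resource create console-vip ocf:heartbeat:IPaddr2 ip=10.106.1.101 cidr_netmask=24 --group console-grp
pcs resource create consoleauth systemd:openstack-nova-consoleauth --group console-grp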

2018-04-05 23:42:13 -0500 received badge  Notable Question (source)
2018-04-05 23:42:13 -0500 received badge  Popular Question (source)
2018-04-05 07:43:02 -0500 received badge  Editor (source)
2018-04-05 07:42:22 -0500 asked a question neutron error on second controller : Failed to bind port when create instance

Hi Folks,

I have a 2-node OpenStack cluster (10.106.1.90 & 10.106.1.99). Both nodes are controller and compute.

I use a pacemaker Virtual IP (10.106.1.101) to access the dashboard. All the services are running in active/active mode.

When the first node owns the VIP, I have no problem creating an instance. But when I move the VIP to the second controller, I can't create a new instance (no matter whether I create it on the first node or the second node with --availability-zone).

I always have this error in neutron/server.log on the second controller:

ERROR neutron.plugins.ml2.managers [req-2f4a2cf5-7db8-416c-ae32-f6f3f558b898 6b2908509afa46b7bdd2706a6b30bbd6 e98b83da4c184979ba862611360ec45d - default default] Failed to bind port 0de77f5c-3bb1-439b-984e-981736968328 on host openstack-2 for vnic_type normal using segments [{'network_id': '8a70bed4-8a5b-41eb-a2bc-cff61d9836a9', 'segmentation_id': None, 'physical_network': u'provider', 'id': 'b22b146c-7402-46a2-a3b2-9443173808b6', 'network_type': u'flat'}]

I use networking option 1; here are the changes I have applied in neutron.conf:

[DEFAULT]
...
core_plugin = ml2
service_plugins =
transport_url = rabbit://openstack:xxxx@10.106.1.90,openstack:xxxx@10.106.1.99
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
dhcp_agents_per_network = 2
[database]
...
connection = mysql+pymysql://neutron:xxxx@10.106.1.101/neutron
[keystone_authtoken]
auth_uri = http://10.106.1.101:5000
auth_url = http://10.106.1.101:35357
memcached_servers = 10.106.1.90:11211,10.106.1.99:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = xxxxx
[nova]
auth_url = http://10.106.1.101:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = xxxx
[oslo_concurrency]
...
lock_path = /var/lib/neutron/tmp
….
[oslo_messaging_rabbit]
rabbit_retry_interval=1
rabbit_retry_backoff=2
rabbit_max_retries=0
rabbit_durable_queues=true
rabbit_ha_queues=true

Changes made to /etc/neutron/plugins/ml2/ml2_conf.ini:

[ml2]
...
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge
extension_drivers = port_security
[ml2_type_flat]
...
flat_networks = provider
[securitygroup]
...
enable_ipset = true

Changes made to /etc/neutron/plugins/ml2/linuxbridge_agent.ini:

[linux_bridge]
physical_interface_mappings = provider:bond0
[vxlan]
enable_vxlan = false
[securitygroup]
...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Changes made to /etc/neutron/dhcp_agent.ini:

[DEFAULT]
...
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

Changes made to /etc/neutron/metadata_agent.ini:

[DEFAULT]
nova_metadata_host = 10.106.1.101
metadata_proxy_shared_secret = xxxx

The symlink /etc/neutron/plugin.ini has been created, net.bridge.bridge-nf-call-iptables and net.bridge.bridge-nf-call-ip6tables are set to "1", and the br_netfilter module is loaded on both controllers.

This command has been run on each controller:

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

The network has been created like this:

openstack network create --share --external --provider-physical-network provider --provider-network-type flat provider

The subnet has been created like this:

openstack subnet create --network provider --allocation-pool start=10.106.1.102,end=10.106.1.110 --dns-nameserver 10.106.1.253 --gateway 10.106.1.254 --subnet-range 10.106.1.0/24 provider
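
For completeness, the check I still need to run to confirm the Linux bridge agent is alive on the second host (a sketch, assuming admin credentials are sourced):

openstack network agent list --host openstack-2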

Did I miss something?

Thx a lot in advance

2018-04-05 07:41:54 -0500 asked a question neutron error on second controller : Failed to bind port


2018-03-29 05:07:58 -0500 received badge  Famous Question (source)
2018-03-23 07:15:49 -0500 commented question INFO nova.consoleauth.manager [req-db1e6559-91c4-4412-b908-c323aae1161a - - - - -] Checking Token: bb41fdbe-5cf3-4627-9ba1-7d48849caf22, False

Hi, did you solve your issue? I have the same problem.

2018-03-23 05:06:39 -0500 received badge  Supporter (source)
2018-03-23 04:58:05 -0500 commented question multi nodes nova vnc setup (error code 1006)

Anyone having a multi-node cluster, with both controller and compute roles on each node, and a working VNC console? thx

2018-03-22 05:38:11 -0500 received badge  Student (source)
2018-03-22 04:39:54 -0500 commented answer multi nodes nova vnc setup (error code 1006)

OK, so it matches all the findings I had: most people keep the controller and compute roles separated. My goal is to have all the roles on all the nodes, but it seems that's really not standard.

2018-03-22 04:31:55 -0500 received badge  Notable Question (source)
2018-03-21 05:21:11 -0500 received badge  Popular Question (source)
2018-03-21 04:55:40 -0500 commented answer multi nodes nova vnc setup (error code 1006)

Yes, I have refreshed the console tab a lot of times. On your setup, are all your hosts both controller and compute? And are all the OpenStack services running on each node (at least nova-api, nova-compute, nova-conductor, nova-consoleauth, nova-novncproxy, nova-scheduler)? Thx

2018-03-21 04:21:26 -0500 received badge  Popular Question (source)
2018-03-21 04:05:07 -0500 commented answer multi nodes nova vnc setup (error code 1006)

Hi Deepa, the command you provided gives me no output, is that normal? But I know I can connect to the console without using the VNC proxy by targeting port 5900 of the host that owns the instance. I made the changes you mentioned (novnc_enabled & vnc_keymap were not present) but same thing. thx

2018-03-21 03:46:34 -0500 commented question multi nodes nova vnc setup (error code 1006)

Hi Bernd, even if I stop nova_consoleauth on 2 nodes, same issue, but in this case the 2 messages I mentioned above are displayed on the same node (where consoleauth is still running).

2018-03-20 10:21:57 -0500 asked a question multi nodes nova vnc setup (error code 1006)

Hi Folks,

I'm going crazy trying to figure out how to set up the [vnc] section of nova.conf.

I have 3 nodes; all of them are both controller and compute hosts:

10.106.1.90 --> node1
10.106.1.99 --> node2
10.106.1.100 --> node3
10.106.1.101 --> HA VIP used by keepalived

I tried several different configurations for the [vnc] section of nova.conf, but I always get an "error code 1006" when I try to access an instance console via VNC. I think it's only a VNC proxy issue, because I can get a working console if I connect with a VNC client to the IP of the node running the instance (on port 5900).
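
For example, a direct connection like this works (node IP just for illustration; the double colon means a raw TCP port, bypassing the noVNC proxy):

vncviewer 10.106.1.99::5900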

The 3 nodes are using several memcached servers (each node has a memcached server).

I found several tutorials, but most of the time the controller and compute roles are not on the same node.

What should the content of the [vnc] section be on each node? My last attempt was like this:

[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://10.106.1.101:6080/vnc_auto.html

In the nova logs, I can see that this message is generated on a different node every time I try to access the console:

2018-03-20 16:58:17.242 1430 INFO nova.consoleauth.manager [req-01c5432d-1a74-4f2f-996c-7d76b3a1159c 6c332937dab249af916428758e416821 73e4d78698ea42afb8f0a48b9bd8bcc9 - default default] Received Token: 1f150a60-3109-4ed7-b44e-9002fb21e408, {'instance_uuid': u'f48cb7b9-91f4-4c42-9187-af081baf8292', 'access_url': u'http://10.106.1.101:6080/vnc_auto.html?token=1f150a60-3109-4ed7-b44e-9002fb21e408', 'token': u'1f150a60-3109-4ed7-b44e-9002fb21e408', 'last_activity_at': 1521561497.239911, 'internal_access_path': None, 'console_type': u'novnc', 'host': u'10.106.1.99', 'port': u'5900'}

and I get this on another node (random too):

2018-03-20 16:58:22.011 1397 INFO nova.consoleauth.manager [req-fe3a012b-eb3c-4866-a1d2-3963726ea93c - - - - -] Checking Token: 1f150a60-3109-4ed7-b44e-9002fb21e408, False

Even when both messages are generated on the same node, I still have the problem.
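
For reference, this is how I check where nova-consoleauth is registered (a sketch, admin credentials sourced):

openstack compute service list --service nova-consoleauth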

Thx a lot in advance

2018-02-20 06:16:56 -0500 received badge  Enthusiast
2018-02-15 08:00:14 -0500 asked a question [osa] issue during "Run container veth wiring script" using "/usr/local/bin/lxc-veth-wiring"

Hi Folks,

I'm trying to deploy an OpenStack cluster with OpenStack-Ansible. I have 3 nodes, and all the nodes will have all the roles (to get real HA). One of them is the deployment host.

During the run of the setup-hosts.yml playbook, there is an error at the last step, "Run container veth wiring script".
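
For reference, the playbook is launched the usual way from the deployment host (a sketch, assuming a standard /opt/openstack-ansible checkout):

cd /opt/openstack-ansible/playbooks
openstack-ansible setup-hosts.yml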

Errors are here: https://pastebin.com/Y74D9nfi
openstack_user_config.yml is here: https://pastebin.com/5Crms6kD

I tried to launch the lxc-veth-wiring script manually, for example for the first error in the list:

/usr/local/bin/lxc-veth-wiring openstack-1_keystone_container-9726163a 9726163a_eth0 eth0 br-mgmt

The first execution of the command returns 3 and the second returns 0; if I then launch the playbook again, I don't get the error anymore.
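
In other words (plain shell, checking the return code with echo $?):

/usr/local/bin/lxc-veth-wiring openstack-1_keystone_container-9726163a 9726163a_eth0 eth0 br-mgmt
echo $?    # -> 3 on the first run
/usr/local/bin/lxc-veth-wiring openstack-1_keystone_container-9726163a 9726163a_eth0 eth0 br-mgmt
echo $?    # -> 0 on the second run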

Do you know what the issue is?

Thx a lot in advance