2019-08-12 14:29:28 -0500 | received badge | ● Famous Question (source) |
2019-08-12 14:29:28 -0500 | received badge | ● Notable Question (source) |
2019-04-12 09:59:41 -0500 | received badge | ● Taxonomist |
2018-04-24 11:11:38 -0500 | asked a question | masakari issue with tox -egenconfig Hi folks, I'm trying to install and use masakari, which is an officially supported OpenStack project (https://github.com/openstack/masakari...). I get the following error at the "tox -egenconfig" step:
I'm using CentOS 7 x64, Python 2.7.5, and python-tox-1.4.2-8. I suspect the issue comes from the python-tox version. I tried to find a more recent python-tox, but the only one I found was 2.3.1-1, and it requires python-virtualenv >= 1.11.2, which is impossible to find for RHEL 7/CentOS 7. Has anyone successfully configured masakari on CentOS 7/RHEL 7? Thanks |
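(A possible workaround, not part of the original post: install a newer tox and virtualenv from PyPI instead of the CentOS RPMs; the package versions and steps below are assumptions.)

    # Assumption: use pip-installed tox/virtualenv instead of the CentOS 7 RPMs.
    sudo yum install -y python-pip
    sudo pip install --upgrade virtualenv "tox>=2.3"
    cd masakari
    tox -e genconfig    # equivalent to "tox -egenconfig"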
2018-04-18 04:48:42 -0500 | asked a question | instance unreachable (in/out) network option 1 all in one Hi folks, I have an all-in-one OpenStack (Queens) node (acting as controller, network, storage, and compute node). I use networking option 1 and followed the documentation on the OpenStack website to set it up. The problem is that the instances can't be reached from the node, or even from another machine on the same network. I have read a lot about this, but everything mentions a "router", and if I understand correctly routers are only needed with networking option 2. Did I miss something or do something wrong? Thanks a lot in advance. Here is the content of neutron.conf:
The content of /etc/neutron/plugins/ml2/ml2_conf.ini:
The content of /etc/neutron/plugins/ml2/linuxbridge_agent.ini:
The content of /etc/neutron/dhcp_agent.ini:
The content of /etc/neutron/metadata_agent.ini:
The symlink /etc/neutron/plugins/ml2/ml2_conf.ini --> /etc/neutron/plugin.ini exists, br_netfilter is enabled, and net.bridge.bridge-nf-call-iptables & net.bridge.bridge-nf-call-ip6tables are set to 1. ip a: (more) |
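(For reference, a provider-network (option 1) linuxbridge_agent.ini typically looks like the sketch below; the interface name eth0 and the "provider" label are assumptions, not values from this post.)

    # /etc/neutron/plugins/ml2/linuxbridge_agent.ini -- illustrative sketch only
    [linux_bridge]
    physical_interface_mappings = provider:eth0

    [vxlan]
    enable_vxlan = false

    [securitygroup]
    enable_security_group = true
    firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver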
2018-04-11 03:28:42 -0500 | received badge | ● Famous Question (source) |
2018-04-06 08:10:35 -0500 | commented answer | neutron error on second controller : Failed to bind port when create instance Hi, I don't have any L3 service running or any L3 log. Isn't the l3_agent only needed when using networking option 2? Thanks |
2018-04-06 04:23:47 -0500 | received badge | ● Teacher (source) |
2018-04-06 04:23:47 -0500 | received badge | ● Self-Learner (source) |
2018-04-06 03:41:49 -0500 | answered a question | multi nodes nova vnc setup (error code 1006) OK, so I have recreated a cluster from scratch and it now works. Here is the setup on each node:
The openstack-nova-consoleauth service does indeed need to run on only one controller, so I created a Pacemaker group that contains the HA VIP and a consoleauth systemd resource. |
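(A minimal pcs sketch of that setup, assuming the 10.106.1.101 VIP from the question; the resource and group names are made up for illustration.)

    # Illustrative only -- resource names are assumptions.
    pcs resource create vip_openstack ocf:heartbeat:IPaddr2 ip=10.106.1.101 cidr_netmask=24
    pcs resource create nova_consoleauth systemd:openstack-nova-consoleauth
    # Grouping keeps consoleauth on whichever node currently holds the VIP.
    pcs resource group add grp_console vip_openstack nova_consoleauth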
2018-04-05 23:42:13 -0500 | received badge | ● Popular Question (source) |
2018-04-05 23:42:13 -0500 | received badge | ● Notable Question (source) |
2018-04-05 07:43:02 -0500 | received badge | ● Editor (source) |
2018-04-05 07:42:22 -0500 | asked a question | neutron error on second controller : Failed to bind port when create instance Hi folks, I have a 2-node OpenStack cluster (10.106.1.90 & 10.106.1.99). Both nodes are controller and compute. I use a Pacemaker virtual IP (10.106.1.101) to access the dashboard. All the services run in active/active mode. When the first node owns the VIP, I have no problem creating an instance. But when I move the VIP to the second controller, I can't create a new instance (no matter whether I create it on the first or the second node with --availability-zone). I always get this error in neutron/server.log on the second controller:
I use networking option 1; here are the changes I have applied in neutron.conf:
Changes made to /etc/neutron/plugins/ml2/ml2_conf.ini:
Changes made to /etc/neutron/plugins/ml2/linuxbridge_agent.ini:
Changes made to /etc/neutron/dhcp_agent.ini:
Changes made to /etc/neutron/metadata_agent.ini:
The symlink /etc/neutron/plugin.ini has been created, net.bridge.bridge-nf-call-iptables and net.bridge.bridge-nf-call-ip6tables are set to "1", and the br_netfilter module is loaded on both controllers. This command has been run on each controller:
The network has been created like this:
The subnet has been created like this:
Did I miss something? Thanks a lot in advance |
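(For context, a provider network and subnet for networking option 1 are usually created along these lines; the names, the address range, and the gateway below are assumptions, not the values used in this post.)

    # Illustrative only -- names and addresses are assumptions.
    openstack network create --share --external \
      --provider-physical-network provider \
      --provider-network-type flat provider
    openstack subnet create --network provider \
      --allocation-pool start=10.106.1.200,end=10.106.1.250 \
      --gateway 10.106.1.1 --subnet-range 10.106.1.0/24 \
      --dns-nameserver 8.8.8.8 provider-subnet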
2018-04-05 07:41:54 -0500 | asked a question | neutron error on second controller : Failed to bind port Hi folks, I have a 2-node OpenStack cluster (10.106.1.90 & 10.106.1.99). Both nodes are controller and compute. I use a Pacemaker virtual IP (10.106.1.101) to access the dashboard. All the services run in active/active mode. When the first node owns the VIP, I have no problem creating an instance. But when I move the VIP to the second controller, I can't create a new instance (no matter whether I create it on the first or the second node with --availability-zone). I always get this error in neutron/server.log on the second controller:
I use networking option 1; here are the changes I have applied in neutron.conf:
Changes made to /etc/neutron/plugins/ml2/ml2_conf.ini:
Changes made to /etc/neutron/plugins/ml2/linuxbridge_agent.ini:
Changes made to /etc/neutron/dhcp_agent.ini:
Changes made to /etc/neutron/metadata_agent.ini:
The symlink /etc/neutron/plugin.ini has been created, net.bridge.bridge-nf-call-iptables and net.bridge.bridge-nf-call-ip6tables are set to "1", and the br_netfilter module is loaded on both controllers. This command has been run on each controller:
Did I miss something? Thanks a lot in advance |
2018-03-29 05:07:58 -0500 | received badge | ● Famous Question (source) |
2018-03-23 07:15:49 -0500 | commented question | INFO nova.consoleauth.manager [req-db1e6559-91c4-4412-b908-c323aae1161a - - - - -] Checking Token: bb41fdbe-5cf3-4627-9ba1-7d48849caf22, False Hi, did you solve your issue? I have the same problem. |
2018-03-23 05:06:39 -0500 | received badge | ● Supporter (source) |
2018-03-23 04:58:05 -0500 | commented question | multi nodes nova vnc setup (error code 1006) Is anyone running a multi-node cluster, with both controller and compute roles on each node, with a working VNC console? Thanks |
2018-03-22 05:38:11 -0500 | received badge | ● Student (source) |
2018-03-22 04:39:54 -0500 | commented answer | multi nodes nova vnc setup (error code 1006) OK, so this matches everything I have found: most people keep the controller and compute roles separate. My goal is to have all the roles on all the nodes, but it seems that's really not standard. |
2018-03-22 04:31:55 -0500 | received badge | ● Notable Question (source) |
2018-03-21 05:21:11 -0500 | received badge | ● Popular Question (source) |
2018-03-21 04:55:40 -0500 | commented answer | multi nodes nova vnc setup (error code 1006) Yes, I have refreshed the console tab many times. On your setup, are all your hosts both controller and compute, and are all the OpenStack services running on each node (at least nova-api, nova-compute, nova-conductor, nova-consoleauth, nova-novncproxy, nova-scheduler)? Thanks |
2018-03-21 04:21:26 -0500 | received badge | ● Popular Question (source) |
2018-03-21 04:05:07 -0500 | commented answer | multi nodes nova vnc setup (error code 1006) Hi Deepa, the command you provided gives me no output; is that normal? But I know I can connect to the console without using the VNC proxy by targeting port 5900 of the host that owns the instance. I made the changes you mentioned (novnc_enabled & vnc_keymap were not present), but same thing. Thanks |
2018-03-21 03:46:34 -0500 | commented question | multi nodes nova vnc setup (error code 1006) Hi Bernd, even if I stop nova_consoleauth on 2 of the nodes, same issue, but in this case the 2 messages I mentioned above are displayed on the same node (where consoleauth is still running). |
2018-03-20 10:21:57 -0500 | asked a question | multi nodes nova vnc setup (error code 1006) Hi folks, I'm going crazy over how to set up the [vnc] section of nova.conf. I have 3 nodes, all of them both controller and compute hosts: 10.106.1.90 --> node1, 10.106.1.99 --> node2, 10.106.1.100 --> node3, 10.106.1.101 --> HA VIP used by keepalived. I tried several different configurations for the [vnc] section of nova.conf, but I always get "error code 1006" when I try to access an instance console via VNC. I think it's purely a VNC proxy issue, because I get a working console if I connect with a VNC client to the IP of the node running the instance (on port 5900). The 3 nodes use several memcached servers (each node runs a memcached server). I found several tutorials, but most of the time the controller and compute roles are not on the same node. What should the [vnc] section contain on each node? My last attempt was like this:
In the nova logs I can see that this message appears on a different node every time I try to access the console:
and I get this on another node (random as well):
Even when both messages are generated on the same node, I still have the problem. Thanks a lot in advance |
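(For illustration, a [vnc] layout that is often used when every node runs both controller and compute services is sketched below for node1; whether novncproxy_base_url should point at the keepalived VIP or at each node's own address is exactly the open question here, so treat these values as assumptions.)

    # nova.conf on node1 (10.106.1.90) -- illustrative values only
    [vnc]
    enabled = true
    server_listen = 0.0.0.0
    server_proxyclient_address = 10.106.1.90
    # Browser-facing URL: the VIP is one option, each node's own IP another.
    novncproxy_base_url = http://10.106.1.101:6080/vnc_auto.html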
2018-02-20 06:16:56 -0500 | received badge | ● Enthusiast |
2018-02-15 08:00:14 -0500 | asked a question | [osa] issue during "Run container veth wiring script" using "/usr/local/bin/lxc-veth-wiring" Hi folks, I'm trying to deploy an OpenStack cluster. I have 3 nodes, and all the nodes will have all the roles (for real HA); one of them is the deployment host. During deployment of the setup-hosts.yml playbook, there is an error at the last step, "Run container veth wiring script". The errors are here: https://pastebin.com/Y74D9nfi and openstack_user_config.yml is here: https://pastebin.com/5Crms6kD I tried to launch the lxc-veth-wiring script manually, for example with the first error in the list: /usr/local/bin/lxc-veth-wiring openstack-1_keystone_container-9726163a 9726163a_eth0 eth0 br-mgmt The first execution of the command returns 3 and the second returns 0; if I then run the playbook again, the error is gone. Do you know what the issue is? Thanks a lot in advance
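(A sketch of the retry sequence described above; the wiring-script arguments come from the post, while the /opt/openstack-ansible path assumes a standard openstack-ansible checkout.)

    # Run the wiring script by hand and check its exit code (arguments from the post).
    /usr/local/bin/lxc-veth-wiring openstack-1_keystone_container-9726163a 9726163a_eth0 eth0 br-mgmt
    echo $?    # returned 3 on the first run, 0 on the second
    # Then re-run the playbook; the path below is an assumption.
    cd /opt/openstack-ansible/playbooks
    openstack-ansible setup-hosts.yml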