
Lucas LAMBOLEY's profile - activity

2016-08-29 22:36:14 -0500 received badge  Famous Question (source)
2016-08-29 22:36:14 -0500 received badge  Notable Question (source)
2016-05-30 07:00:53 -0500 received badge  Famous Question (source)
2016-05-23 21:43:27 -0500 received badge  Famous Question (source)
2016-04-25 02:00:07 -0500 received badge  Commentator
2016-04-18 15:08:26 -0500 received badge  Famous Question (source)
2016-04-17 15:48:36 -0500 received badge  Notable Question (source)
2016-04-15 12:11:45 -0500 received badge  Popular Question (source)
2016-04-15 03:18:36 -0500 asked a question Liberty to Mitaka ?

Hi,

Is it possible to upgrade Liberty to Mitaka? Can I do it manually, by changing the repository, stopping each service and upgrading it, along with the back-end configuration of each service and the database? Or is there a tool that makes the upgrade easier?

Thank you.
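For what it's worth, here is a rough sketch of the manual, repository-based path on an RDO/CentOS 7 install. The release package and db-sync commands below are the usual RDO ones, Keystone is only an example service, and the backup paths are made up; back everything up and check the Mitaka release notes per service before trying this.

    # Back up databases and configuration first (sketch, paths are examples).
    mysqldump --all-databases > /root/openstack-db-backup.sql
    tar czf /root/openstack-etc-backup.tar.gz /etc/keystone /etc/nova /etc/neutron /etc/glance /etc/cinder

    # Switch the repository from Liberty to Mitaka (RDO release package).
    yum install -y centos-release-openstack-mitaka

    # Then, one service at a time: stop it, upgrade the packages, run the DB
    # migrations, restart. Keystone under httpd is shown as an example:
    systemctl stop httpd
    yum upgrade -y openstack-keystone python-keystone
    keystone-manage db_sync
    systemctl start httpd

The order generally matters (Keystone first, then Glance, Nova, Neutron, and so on), so check the upgrade notes for each service.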

2016-03-21 10:56:45 -0500 received badge  Popular Question (source)
2016-03-18 08:04:21 -0500 commented answer HAProxy, Keepalived and neutron.

Sorry, it works now. I was concentrating on the OpenStack services and had completely forgotten the non-API services, like the Galera cluster for MariaDB, etc. I am going to use the link for the core non-OpenStack services. Thanks.
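For reference, a minimal sketch of what clustering that "forgotten" database layer can look like with Galera on CentOS 7; the node IPs, cluster name and file path below are examples, not values from this thread.

    # /etc/my.cnf.d/galera.cnf on each MariaDB node (sketch)
    [galera]
    wsrep_on=ON
    wsrep_provider=/usr/lib64/galera/libgalera_smm.so
    wsrep_cluster_name="openstack"
    wsrep_cluster_address="gcomm://10.0.0.11,10.0.0.12,10.0.0.13"
    wsrep_node_address="10.0.0.11"          # this node's own IP
    binlog_format=ROW
    default_storage_engine=InnoDB
    innodb_autoinc_lock_mode=2

RabbitMQ needs the same kind of treatment (a cluster with mirrored queues) for HA to mean anything beyond the API layer.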

2016-03-18 04:38:45 -0500 received badge  Famous Question (source)
2016-03-18 03:51:14 -0500 received badge  Associate Editor (source)
2016-03-18 03:43:56 -0500 asked a question HAProxy, Keepalived and neutron.

Hi,

I'm stuck. I want to make my OpenStack services highly available, so I decided to use HAProxy and Keepalived: Keepalived provides HA for the VIP used by HAProxy, and HAProxy load-balances the services behind the VIP and the service port. But, for example, say I have a network node and I duplicate it. If network node one goes down, HAProxy redirects to the API on the other network node, but the networks created on network node one are not available on the second node, because they exist only on the first one. It's the same for Glance images, Cinder volumes, etc. What am I not understanding? Please help me; I can't work when I don't understand something, and I haven't found any answer to my question. Sorry for my English. Best regards, L.L

EDIT1: Do I need to configure the same database for all nodes?
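For illustration, a minimal sketch of the HAProxy + Keepalived arrangement described above for one API endpoint; the VIP 10.0.0.100, the back-end IPs and the interface name are examples.

    # /etc/haproxy/haproxy.cfg (sketch): load-balance neutron-server behind the VIP
    listen neutron-server
        bind 10.0.0.100:9696
        balance roundrobin
        option tcpka
        server network1 10.0.0.21:9696 check inter 2000 rise 2 fall 5
        server network2 10.0.0.22:9696 check inter 2000 rise 2 fall 5

    # /etc/keepalived/keepalived.conf (sketch): same block on both HAProxy nodes,
    # with a lower priority on the backup node
    vrrp_instance VI_1 {
        state MASTER
        interface eth0
        virtual_router_id 51
        priority 101
        virtual_ipaddress {
            10.0.0.100
        }
    }

Note that this only makes the API endpoint highly available; the state behind it (the database, the message queue, and the L3/DHCP agents that actually host the routers and networks) needs its own redundancy, which is the gap the question describes.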

2016-02-22 04:28:20 -0500 marked best answer Network in openstack.

Hi,

I am an apprentice, and my project is to deploy a test OpenStack with 3 nodes, but I am struggling with networking in OpenStack, so I have some questions.

I have seen an example architecture with one interface for the controller, three for the network node, and two for the compute node. So what is the right architecture: two interfaces on every node, or one for the controller, two for the compute node and three for the network node? In my book and in the documentation, every node needs two interfaces: the first for the management network and the other for the public network. If I understand correctly, the management network is used for the nodes to talk to the APIs and to each other, and the public network is for the rest, where the floating IPs for the VMs come from, etc. The network node needs two interfaces so it can connect to the public network, and the compute node needs two so it can communicate with the network node over the public network. But why does the controller need two interfaces? To serve Horizon on the public network?

Am I right, or have I misunderstood everything? If I have misunderstood, please explain it to me, and don't just tell me to search; I have some books about OpenStack and Neutron, and I have read the documentation, but ... :/

And if I am right, what addressing should each network use? I have static addresses in the 172.17.7.x range. So the public network is in this range and the management network is in 10.x.x.x? If yes, why? For security, or just to separate VM data traffic from node-to-node communication? For the moment I have only one interface on each node, so I can't test whether I am right.

Sorry for my bad English and my inexperience; I am new to networking and I want to learn a lot. Thank you.

PS: I am using Liberty and CentOS 7.
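For illustration only, here is what a two-interface layout could look like on CentOS 7 under the assumptions above; the device names and subnets (10.0.0.0/24 for management, the 172.17.7.x range for the public side) are examples, not a recommendation.

    # /etc/sysconfig/network-scripts/ifcfg-eth0  (management network, static address)
    DEVICE=eth0
    BOOTPROTO=static
    IPADDR=10.0.0.21
    NETMASK=255.255.255.0
    ONBOOT=yes

    # /etc/sysconfig/network-scripts/ifcfg-eth1  (public/provider network: no IP here,
    # the interface is attached to the Neutron/OVS bridge instead)
    DEVICE=eth1
    BOOTPROTO=none
    ONBOOT=yes

The split mainly separates management/API traffic from tenant and external traffic; any security benefit depends on how the networks are filtered.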

2016-02-22 03:52:41 -0500 commented answer Pacemaker or Nagios for high availability?

OK, thanks a lot for this information. I will follow your link. Bad English, sorry >< Bye.

2016-02-22 02:56:25 -0500 received badge  Supporter (source)
2016-02-22 02:56:07 -0500 marked best answer agent-list all fail

Hi,

I installed OpenStack with Packstack. I configured VXLAN and created networks, OVS bridges, routers, VMs, etc., and everything worked. BUT, after ONE reboot, every agent in agent-list is down.

    [root@controller ~(keystone_admin)]# neutron agent-list
+--------------------------------------+--------------------+----------+-------+----------------+---------------------------+
| id                                   | agent_type         | host     | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+----------+-------+----------------+---------------------------+
| 05eb1336-2a07-4541-b924-5bb7a4d37198 | L3 agent           | network  | xxx   | True           | neutron-l3-agent          |
| 3303094b-6239-4982-92c7-9a8249f73366 | Open vSwitch agent | compute2 | xxx   | True           | neutron-openvswitch-agent |
| 4e1466ed-59b2-4e3b-996c-110d89b74e30 | Open vSwitch agent | compute1 | xxx   | True           | neutron-openvswitch-agent |
| 820e388d-df54-4459-860c-d1eaf672e832 | Open vSwitch agent | network  | xxx   | True           | neutron-openvswitch-agent |
| 82b3e531-4d60-4131-adda-f19203895f64 | DHCP agent         | network  | xxx   | True           | neutron-dhcp-agent        |
| 85ab299e-ce0b-4fb2-8704-ffe574ff2570 | Metadata agent     | network  | xxx   | True           | neutron-metadata-agent    |
+--------------------------------------+--------------------+----------+-------+----------------+---------------------------+

Now the networking fails. In the logs on the compute and network nodes, for Open vSwitch etc., there are 0 ERRORs. But in the neutron-server log on the controller, I can see:

2016-01-21 14:05:13.450 2959 DEBUG neutron.db.agents_db [req-300e2b95-4607-466c-95ef-bc830664e0d4 - - - - -] Stale message received with timestamp: 2016-01-21T12:19:39Z. Skipping processing because it's older than the server start timestamp: 2016-01-21T12:52:37Z report_state /usr/lib/python2.7/site-packages/neutron/db/agents_db.py:300
2016-01-21 14:05:13.719 2958 DEBUG neutron.db.agents_db [req-42390666-7664-4894-b945-44c52e164e15 - - - - -] Stale message received with timestamp: 2016-01-21T10:53:06Z. Skipping processing because it's older than the server start timestamp: 2016-01-21T12:52:37Z report_state /usr/lib/python2.7/site-packages/neutron/db/agents_db.py:300
2016-01-21 14:05:14.242 2959 DEBUG neutron.db.agents_db [req-997d32cf-8620-4ad8-98d8-d8ee84021e6e - - - - -] Stale message received with timestamp: 2016-01-21T12:19:40Z. Skipping processing because it's older than the server start timestamp: 2016-01-21T12:52:37Z report_state /usr/lib/python2.7/site-packages/neutron/db/agents_db.py:300
2016-01-21 14:05:14.260 2958 DEBUG neutron.db.agents_db [req-ca78d680-5ebb-46e4-a8fe-49d043d58730 - - - - -] Stale message received with timestamp: 2016-01-21T12:19:40Z. Skipping processing because it's older than the server start timestamp: 2016-01-21T12:52:37Z report_state /usr/lib/python2.7/site-packages/neutron/db/agents_db.py:300
2016-01-21 14:05:14.369 2959 DEBUG neutron.db.agents_db [req-f8a8a933-4ed4-41b1-ab74-3e221613f6aa - - - - -] Stale message received with timestamp: 2016-01-21T11:53:37Z. Skipping processing because it's older than the server start timestamp: 2016-01-21T12:52:37Z report_state /usr/lib/python2.7/site-packages/neutron/db/agents_db.py:300
2016-01-21 14:05:14.537 2958 DEBUG neutron.db.agents_db [req-05a0c450-8b46-41e1-bcd7-3019a53e68c3 - - - - -] Stale message received with timestamp: 2016-01-21T12:19:40Z. Skipping processing because it's older than the server start timestamp: 2016-01-21T12:52:37Z report_state /usr/lib/python2.7/site-packages/neutron/db/agents_db.py:300

Six stale-timestamp messages, and I have six agents down... Do I need to configure NTP? But the company DNS doesn't resolve the NTP server pool ...
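In case it helps, a minimal chrony sketch for CentOS 7 when the public NTP pool cannot be resolved: let the controller serve time on the management network and point the other nodes at it (10.0.0.1 and 10.0.0.0/24 are example values).

    # Controller, /etc/chrony.conf (sketch): keep serving time even with no upstream.
    allow 10.0.0.0/24
    local stratum 10

    # Compute and network nodes, /etc/chrony.conf (sketch): sync against the controller.
    server 10.0.0.1 iburst

    # On every node:
    yum install -y chrony
    systemctl enable chronyd && systemctl restart chronyd
    chronyc sources        # verify that the nodes agree on the time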

Thank you for your help.

Best Regards,

a noob in OpenStack..

2016-02-22 02:44:29 -0500 answered a question Pacemaker or Nagios for high availability?

@dbaxps Thanks for the reply. I was happy when I deployed my OpenStack successfully, but I can see that was the easy part; the hard part is HA ... I think Ansible and TripleO are not a bad choice for my HA OpenStack deployment. But I am stuck: I have 4 nodes, 1 controller, 1 network node and 2 computes, yet the Ansible RDO documentation shows 3 controllers and 1 compute, and you talk about a 3-controller architecture. So there is no network node in an HA deployment? The L3, ML2, DHCP agents, etc. all run on the controllers?

So personally, you propose Ansible & TripleO because it is easier to understand what you are doing with them, and in case of failure you can repair the problem quickly? It is going to become a reference. And active/active with Keepalived because the services run simultaneously. Two of you have proposed A/A HA, so with that, and after some googling, I want to use A/A for my architecture.

Hmm ... Sorry, but what do you mean when you say RH? ><'

EDIT: Sorry for the late answer, I don't work on weekends.

2016-02-20 13:33:29 -0500 received badge  Notable Question (source)
2016-02-19 08:01:18 -0500 received badge  Popular Question (source)
2016-02-19 06:47:36 -0500 commented answer Pacemaker or Nagios for high availability?

OK, so I need to decide what I want for my architecture: active/passive with Pacemaker or active/active with Keepalived. If I understand correctly, in active/passive, when a service fails, Pacemaker moves it to another host, and in active/active more than one instance of the service runs at the same time? Is that correct? ^^'

2016-02-19 04:27:29 -0500 asked a question Pacemaker or Nagios for high availability?

Hi,

I have a question: what is the best way to get high availability for the controller and network nodes? I want to duplicate these nodes so that if one dies, I can start the second. I have seen Pacemaker in a lot of articles, but I don't know whether it is a good idea for a production environment. Or can I do this with Nagios?

Thanks for your help. Sorry for my English. L.L
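For a sense of what the Pacemaker route looks like in practice, here is a minimal pcs sketch for an active/passive virtual IP on CentOS 7; the cluster name, node names and VIP are examples.

    # On one controller, after installing pacemaker/corosync/pcs and setting the hacluster password:
    pcs cluster auth controller1 controller2 -u hacluster
    pcs cluster setup --name openstack-ha controller1 controller2
    pcs cluster start --all

    # A floating IP that Pacemaker moves to the surviving node on failure:
    pcs resource create vip ocf:heartbeat:IPaddr2 ip=10.0.0.100 cidr_netmask=24 op monitor interval=30s

Nagios, by contrast, only monitors and alerts; it does not move services or IPs by itself, so it complements rather than replaces Pacemaker or Keepalived.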

2016-02-19 04:19:20 -0500 received badge  Notable Question (source)
2016-02-17 12:05:21 -0500 received badge  Popular Question (source)
2016-02-17 08:35:20 -0500 marked best answer VM snapshot if compute down

Hi,

Imagine I have 2 compute nodes, named compute1 and compute2. A VM named "VM1" runs on compute1. If compute1 goes down by accident at instant T, how can I keep VM1's service available? With a snapshot? But if I use a snapshot, I am going to lose data, since the snapshot is not a snapshot of my VM at exactly instant T. And the snapshot doesn't preserve the VM's IP, etc. So it's a bad idea.

With live-migration or migrate? But if compute1 is down, how can I migrate VM1 to compute2? Is it even possible? ...

Please help me, I have no idea. Best regards, L.L
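For what it's worth, a sketch of the rebuild path Nova offers for this case with a Liberty-era client; the instance and host names are the ones from the example above, and --on-shared-storage only applies if both computes share the instance storage.

    # Check that nova really considers compute1 down:
    nova service-list --binary nova-compute

    # Rebuild VM1 on compute2, keeping its ports/IPs (disk contents survive only on shared storage):
    nova evacuate --on-shared-storage VM1 compute2

    # Or move everything off the dead host in one go:
    nova host-evacuate --target_host compute2 compute1

Unlike the snapshot approach, evacuate rebuilds the instance with its existing Neutron ports, so the IP addresses are preserved.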

2016-02-17 08:35:20 -0500 received badge  Scholar (source)
2016-02-17 06:29:55 -0500 received badge  Famous Question (source)
2016-02-17 06:27:06 -0500 commented question compute uses 1 vCPU but has 0 instances on it

Solved. Bad configuration in /etc/nova/nova.conf

2016-02-16 09:34:28 -0500 asked a question compute uses 1 vCPU but has 0 instances on it

Hi !

http://image.noelshack.com/fichiers/2...

Any idea ?

Regards, L.L

2016-02-16 06:27:44 -0500 commented question Some compute nodes are listed as down.

Is NTP configured ?

2016-02-16 06:18:09 -0500 received badge  Notable Question (source)
2016-02-16 02:47:20 -0500 edited question novnc access denied

Hi,

I have some trouble with VNC. I have 4 nodes: two computes, one controller and one network node. I have two instances, one on compute1 and the other on compute2. When I try to open the console from the dashboard, it fails with access denied. If I test with nova get-vnc-console id-vm novnc, I get the same issue.

So I tried to open http://IP_CONTROLLER:6080 and got access denied. BUT when I am on the controller and open http://127.0.0.1:6080 I get noVNC ... So it works with localhost but not with the IP. I can change base_url to 127.0.0.1, but if I do that I can only reach the consoles from the controller itself.

I don't know where the problem is: Apache, or the VNC configuration?

CONTROLLER

novncproxy_host=0.0.0.0
vncserver_host= 0.0.0.0
novncproxy_port=6080
novncproxy_base_url=http://controller:6080/vnc_auto.html
vnc_port=5900
[vnc]
novncproxy_base_url=http://10.0.0.1:6080/vnc_auto.html
vncserver_listen=10.0.0.1
vncserver_proxyclient_address=10.0.0.1
vnc_enabled=true

COMPUTEX

novncproxy_host=0.0.0.0
novncproxy_port=6080
vncserver_proxyclient_address=compute1
vnc_keymap=en-us
vnc_enabled=True
vncserver_listen=10.0.0.3
novncproxy_base_url=http://10.0.0.1:6080/vnc_auto.html
[vnc]
novncproxy_base_url=http://controller:6080/vnc_auto.html
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=0.0.0.0

Thanks for your HELP :)

EDIT: it's not iptables.

[root@controller ~(keystone_lucas)]# iptables -nvL
Chain INPUT (policy ACCEPT 89185 packets, 21M bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 82611 packets, 14M bytes)
 pkts bytes target     prot opt in     out     source               destination

EDIT2: wget 127.0.0.1:6080 and wget controller:6080 both work. I do not get access denied with wget controller:6080.

EDIT3: vncserver is listening:

 [root@controller ~(keystone_lucas)]# netstat -an | grep 6080
tcp        0      0 0.0.0.0:6080            0.0.0.0:*               LISTEN

EDIT4: It fails when noVNC tries to connect to port 5900. Port 5900 is open on the compute node. If I do telnet 10.0.0.4 5900 it says connection refused, but if I do telnet 127.0.0.1 5900 it works... just like port 6080.

EDIT5: It only accepts connections on 127.0.0.1:5900; I need to change that, but I don't know how.

tcp        0      0 127.0.0.1:5900          0.0.0.0:*               LISTEN

EDIT6: It works if base_url = 127.0.0.1. I had seen #vnc_port=5900, so on a hunch I guessed there was a vnc_host=X option; I created it, set it to 0.0.0.0, and it works. I think it is the same problem with port 6080.

[root@compute2 ~]# netstat -an | grep 5900
tcp        0      0 10.0.0.4:5900           0.0.0.0:*               LISTEN     
tcp        0      0 10.0.0.4:5900           10.0.0.1:53553          ESTABLISHED

Sorry for my ... (more)
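Pulling the edits above together, here is a sketch of the [vnc] settings that usually produce this behaviour on Liberty. The addresses are the ones from the question (10.0.0.1 for the controller, 10.0.0.4 for a compute node), and some of the lines pasted above sit before the [vnc] header and therefore land in a different section, so double-check where each option really lives in your nova.conf.

    # Controller, /etc/nova/nova.conf (sketch)
    [vnc]
    enabled = true
    vncserver_listen = 10.0.0.1
    vncserver_proxyclient_address = 10.0.0.1
    novncproxy_base_url = http://10.0.0.1:6080/vnc_auto.html

    # Compute nodes, /etc/nova/nova.conf (sketch)
    [vnc]
    enabled = true
    vncserver_listen = 0.0.0.0                   # makes QEMU's VNC listen beyond 127.0.0.1:5900
    vncserver_proxyclient_address = 10.0.0.4     # this compute node's management IP
    novncproxy_base_url = http://10.0.0.1:6080/vnc_auto.html

The base_url has to be an address the browser can actually reach, which is why 127.0.0.1 only works from the controller itself, and a changed vncserver_listen generally only takes effect for instances started or hard-rebooted after nova-compute is restarted.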