msanabria11's profile - activity

2016-08-01 07:21:09 -0500 received badge  Student (source)
2016-06-16 04:20:35 -0500 received badge  Famous Question (source)
2016-06-14 21:46:15 -0500 received badge  Famous Question (source)
2016-06-10 07:19:36 -0500 received badge  Notable Question (source)
2016-06-09 13:20:25 -0500 received badge  Popular Question (source)
2016-06-09 09:05:58 -0500 asked a question Instances power off by themselves

Hello,

I am running OpenStack Mitaka on a 5-node cluster: 1 controller/network node and 4 compute nodes. I am able to create VMs and use them normally throughout the day, but when I come back in the morning, some instances are powered off.

I've checked to make sure there is enough disk space (OK). I've checked whether the problem occurs on more than one compute node (YES, it does). I've also checked RAM and CPU usage (OK).

Based on some research, I've tried setting the handle_virt_lifecycle_events option in nova.conf to both true and false, with no success either way.
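
For reference, here is the relevant stanza from my nova.conf on the compute nodes. The sync_power_state_interval line is something I found while researching this and am considering; if I read the config reference right, -1 disables the periodic sync entirely and 600 seconds is the default:

[DEFAULT]
# React to power-state events emitted by libvirt (tried both true and false)
handle_virt_lifecycle_events = false
# Seconds between the periodic DB/hypervisor power-state sync; -1 disables it
sync_power_state_interval = 600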

It seems like a mismatch between the power state in the DB and the power state reported by the hypervisor (KVM) is what triggers this.
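
In case it helps, this is how I've been comparing the two sides, using one of the affected instance UUIDs from the log below:

# On the compute node: the state according to libvirt/KVM
virsh domstate 493ad84c-1e46-410c-8fab-078c77ddea10

# On the controller: the state according to the Nova DB
nova show 493ad84c-1e46-410c-8fab-078c77ddea10 | grep state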

Any ideas?

Here's the only relevant information that I could find in /var/log/nova/nova-compute.log:

2016-06-08 21:20:18.208 1738 INFO nova.compute.manager [-] [instance: 493ad84c-1e46-410c-8fab-078c77ddea10] During _sync_instance_power_state the DB power_state (1) does not match the vm_power_state from the hypervisor (4). Updating power_state in the DB to match the hypervisor.
2016-06-08 21:20:18.228 1738 INFO nova.compute.manager [-] [instance: 2dcb6116-55ab-4987-a4f5-ce3e0f1edda4] During _sync_instance_power_state the DB power_state (1) does not match the vm_power_state from the hypervisor (4). Updating power_state in the DB to match the hypervisor.
2016-06-08 21:20:18.232 1738 INFO nova.compute.manager [-] [instance: d1d0172d-20aa-4333-b9e9-aabc064287bc] During _sync_instance_power_state the DB power_state (1) does not match the vm_power_state from the hypervisor (4). Updating power_state in the DB to match the hypervisor.
2016-06-08 21:20:18.298 1738 WARNING nova.compute.manager [-] [instance: 493ad84c-1e46-410c-8fab-078c77ddea10] Instance shutdown by itself. Calling the stop API. Current vm_state: active, current task_state: None, original DB power_state: 1, current VM power_state: 4
2016-06-08 21:20:18.314 1738 WARNING nova.compute.manager [-] [instance: 2dcb6116-55ab-4987-a4f5-ce3e0f1edda4] Instance shutdown by itself. Calling the stop API. Current vm_state: active, current task_state: None, original DB power_state: 1, current VM power_state: 4
2016-06-08 21:20:18.334 1738 WARNING nova.compute.manager [-] [instance: d1d0172d-20aa-4333-b9e9-aabc064287bc] Instance shutdown by itself. Calling the stop API. Current vm_state: active, current task_state: None, original DB power_state: 1, current VM power_state: 4
2016-05-25 00:15:39 -0500 received badge  Notable Question (source)
2016-05-14 21:20:02 -0500 commented answer Instance will not launch. Neutron error on compute node.

I meant to post my own answer, but this was the cause: I needed to install ipset. Thank you!
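
For anyone who finds this later, on my CentOS compute node the fix was simply:

yum install -y ipset

followed by a restart of the neutron-openvswitch-agent service.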

2016-05-14 21:19:35 -0500 received badge  Supporter (source)
2016-05-08 18:07:59 -0500 received badge  Popular Question (source)
2016-05-06 10:33:09 -0500 asked a question Instance will not launch. Neutron error on compute node.

I'm getting a weird error when I try to launch an instance. My current setup is a 2-node architecture with a controller node and a compute node, and I am following the OpenStack manual installation guide for the Mitaka release. I tested this setup in my VM lab and it was successful. Now that I have moved the setup to bare metal, I get the following error on the compute node when I try to launch an instance. The error seems to crash the mariadb and rabbitmq services: it keeps looping about once every 2 seconds, which then starts producing errors on the controller node. Any thoughts here? I am attempting to use the provider networking method as detailed here: http://docs.openstack.org/mitaka/networking-guide/scenario-provider-ovs.html
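
For completeness, this is how I've been watching the agents and services while the error loops (service names as installed from the CentOS/RHEL packages):

# On the controller
neutron agent-list
systemctl status mariadb rabbitmq-server

# On the compute node
systemctl status neutron-openvswitch-agent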

/var/log/neutron/openvswitch-agent.log on compute node

2016-05-06 11:12:07.548 1533 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-d7f19453-6bc5-47c1-af41-976d0e71ce2b - - - - -] Agent out of sync with plugin!
2016-05-06 11:12:07.549 1533 WARNING neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-d7f19453-6bc5-47c1-af41-976d0e71ce2b - - - - -] Clearing cache of registered ports, retries to resync were > 5
2016-05-06 11:12:07.579 1533 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-d7f19453-6bc5-47c1-af41-976d0e71ce2b - - - - -] Port 'tap2dcb0b92-6e' has lost its vlan tag '1'!
2016-05-06 11:12:07.706 1533 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-d7f19453-6bc5-47c1-af41-976d0e71ce2b - - - - -] Port 2dcb0b92-6e60-49f6-93fb-f452a882b8e5 updated. Details: {u'profile': {}, u'network_qos_policy_id': None, u'qos_policy_id': None, u'allowed_address_pairs': [], u'admin_state_up': True, u'network_id': u'5971a3c5-1be7-4e88-a328-7407382eabee', u'segmentation_id': None, u'device_owner': u'compute:nova', u'physical_network': u'provider', u'mac_address': u'fa:16:3e:27:7e:6a', u'device': u'2dcb0b92-6e60-49f6-93fb-f452a882b8e5', u'port_security_enabled': True, u'port_id': u'2dcb0b92-6e60-49f6-93fb-f452a882b8e5', u'fixed_ips': [{u'subnet_id': u'ccf823a3-479a-4772-80fb-08dcfb2db761', u'ip_address': u'172.18.120.64'}], u'network_type': u'flat', u'security_groups': [u'd8eaadc7-a42c-48ec-8e41-4db46c11c8fa']}
2016-05-06 11:12:07.744 1533 INFO neutron.agent.securitygroups_rpc [req-d7f19453-6bc5-47c1-af41-976d0e71ce2b - - - - -] Preparing filters for devices set([u'2dcb0b92-6e60-49f6-93fb-f452a882b8e5'])
2016-05-06 11:12:07.898 1533 WARNING neutron.agent.linux.iptables_manager [req-d7f19453-6bc5-47c1-af41-976d0e71ce2b - - - - -] Tried to remove rule that was not there: 'PREROUTING' u'-m physdev --physdev-in tap2dcb0b92-6e -j CT --zone 1' True False
2016-05-06 11:12:07.899 1533 WARNING neutron.agent.linux.iptables_manager [req-d7f19453-6bc5-47c1-af41-976d0e71ce2b - - - - -] Tried to remove rule that was not there: 'PREROUTING' u'-m physdev --physdev-in tap2dcb0b92-6e -j CT --zone 1' True False
2016-05-06 11:12:07.902 1533 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-d7f19453-6bc5-47c1-af41-976d0e71ce2b - - - - -] Error while processing VIF ports
2016-05-06 11:12:07.902 1533 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent Traceback (most recent call last):
2016-05-06 11:12:07.902 1533 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 1975, in rpc_loop
2016-05-06 11:12:07.902 1533 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent     port_info, ovs_restarted)
2016-05-06 11:12:07.902 1533 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 1607, in process_network_ports
2016-05-06 11 ...
2016-05-06 10:17:25 -0500 received badge  Commentator
2016-05-06 10:17:25 -0500 commented answer DNS for OpenStack infrastructure.

Thank you.

2016-05-06 10:17:19 -0500 received badge  Editor (source)
2016-04-27 15:27:49 -0500 commented answer Neutron networking for compute node(s).

Makes sense. Now configuring that will be the fun part!

2016-04-27 09:07:50 -0500 commented answer Neutron networking for compute node(s).

It is a private cloud so it is fine for my usage.

2016-04-27 09:07:14 -0500 commented answer Neutron networking for compute node(s).

Thank you for the diagram. Puts things into perspective for me. So does the eth1 interface in this diagram have an IP address? Or do you just let openvswitch take care of it through the neutron config?

2016-04-27 08:26:13 -0500 received badge  Famous Question (source)
2016-04-27 02:37:45 -0500 received badge  Notable Question (source)
2016-04-27 02:37:10 -0500 received badge  Famous Question (source)
2016-04-27 00:17:01 -0500 received badge  Notable Question (source)
2016-04-27 00:16:36 -0500 received badge  Popular Question (source)
2016-04-27 00:04:13 -0500 received badge  Popular Question (source)
2016-04-26 19:53:29 -0500 asked a question DNS for OpenStack infrastructure.

What is everyone doing for DNS for their OpenStack infrastructure nodes?

The manual installation documentation on OpenStack's website asks you to edit the /etc/hosts file on each node to map hostnames to management IP addresses. What is the best practice for a production environment? Are you guys adding DNS records for your nodes?
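
For example, following the guide I currently have entries like these in /etc/hosts on every node (hostnames and addresses are just what I picked for my lab):

192.168.0.40  controller
192.168.0.41  compute1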

For example, if we have an external network, a management network, and a storage network, are you adding DNS records for each network or just for the management network? I'm trying to figure out the best practice here so I can avoid using raw IPs when possible.

Edit: Should have mentioned that I already knew about Designate. Just wanted to see what the consensus was about DNS.

2016-04-26 18:58:23 -0500 asked a question Neutron networking for compute node(s).

Hello,

Let's assume that I have the following setup for my test OpenStack:

  • 10.0.0.0/24 - Public Network (used for floating IPs); gateway is 10.0.0.1
  • 192.168.0.0/24 - Management Network (and everything else, really); gateway is 192.168.0.1

Now I currently have two nodes as such:

Controller Node:

  • eth0: 192.168.0.40
  • eth1 (br-ex): 10.0.0.40

Compute Node:

  • eth0: 192.168.0.41
  • eth1: ??

My question is: do I need to configure eth1 on the Compute Node as a bridge, similar to br-ex on the Controller Node? Does the Compute Node even need to be connected to the Public Network, or can it pass its networking traffic over the Management Network? What is the best practice here? I can't seem to wrap my head around it. The goal is to begin segregating network traffic for performance and security reasons. Any advice?
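
For context, br-ex on the Controller Node was created roughly like this before I assigned the 10.0.0.40 address to it, so the question is really whether the Compute Node needs the same treatment:

ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex eth1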

2016-04-26 18:42:20 -0500 commented answer Openstack Manual Installation (CentOS/RHEL) Network Questions

This answer helped me a lot. I didn't have either the openvswitch or the neutron-ovs-cleanup service started. I was able to get this working in my test lab. Thank you.

2016-04-26 18:40:39 -0500 received badge  Scholar (source)
2016-03-13 10:32:10 -0500 commented answer Openstack Manual Installation (CentOS/RHEL) Network Questions

Thank you for the detailed reply. If I wanted to use LACP, would I have to set that up in the system configuration file (ifcfg-eth0), or is it simply not possible with OVS?
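
My best guess from the OVS docs (untested on my end) would be something like:

ovs-vsctl add-bond br-ex bond0 eth0 eth1 lacp=active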

2016-03-13 10:28:20 -0500 commented answer Openstack Manual Installation (CentOS/RHEL) Network Questions

Thank you for the informative links.

2016-03-13 10:27:31 -0500 commented answer Openstack Manual Installation (CentOS/RHEL) Network Questions

So are you using the openvswitch_agent.ini file to create the bridges for you? When you restart the service, it creates the bridges? Also, what I meant to ask was: what's the process for having the management network and the external network on different NICs?

2016-03-12 10:05:45 -0500 received badge  Famous Question (source)
2016-03-12 01:34:39 -0500 received badge  Notable Question (source)
2016-03-11 23:40:29 -0500 received badge  Popular Question (source)
2016-03-11 14:58:55 -0500 asked a question Openstack Manual Installation (CentOS/RHEL) Network Questions

I have been using packstack to set up and test different OpenStack environment types, including all-in-one, separate controller and compute nodes, and an all-in-one with a separate storage appliance (Nexenta). The goal now is to build a production-ready OpenStack private cloud to use for testing, so I would like to get away from packstack (unless you can convince me that packstack is production ready). I plan on converting all of this into Chef for my specific environment.

I have been following the Installation Guide for CentOS/RHEL for Liberty and have become stuck at the networking portion. I want to use Open vSwitch as the ML2 mechanism, but the documents don't seem to have any information about creating the br-int, br-tun, and br-ex bridges that packstack would have created. I have already checked the Networking Guide and the Operations Guide with no luck.

It is also very important to me to have different networks on different interfaces, such as management traffic on eth0, the external network on eth1, and any other networks as needed. I am open to using a provider network or allowing self-service networks (without FWaaS and LBaaS).
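
For context, this is the kind of agent config I've been experimenting with, pieced together from what packstack generated in my earlier tests (the bridge and interface names are just my guesses for this layout):

# /etc/neutron/plugins/ml2/openvswitch_agent.ini
[ovs]
bridge_mappings = provider:br-ex

and on the OVS side I created the external bridge by hand first:

ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex eth1

br-int and br-tun seemed to appear on their own once the agent started, but I'm not sure that's the intended workflow, hence the questions below.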

My questions are:

1) How can I configure Open vSwitch during a manual installation of OpenStack without packstack?

2) Do I need to create these br-* bridges or am I seeing this wrong?

3) How can I set this up across different interfaces instead of all on the "management" network like in the Install Guide?

4) Is there a situation in which I would not need one of the br-ex, br-int, or br-tun interfaces?

2016-03-03 11:32:33 -0500 commented answer Failed to start OpenStack Image Service (code-named Glance) API server.

This worked for me following the latest Liberty documentation.

2016-03-03 11:31:29 -0500 commented answer keystone HTTP 500 error

This answer worked for me when following the latest Liberty documentation. I didn't have to change any db passwords or whatnot. This should be the accepted answer.